title
stringlengths
4
168
content
stringlengths
7
1.74M
commands
sequencelengths
1
5.62k
url
stringlengths
79
342
Chapter 5. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Chapter 5. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object ValidatingWebhook describes an admission webhook and the resources and operations it applies to. 5.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 5.1.2. .webhooks[] Description ValidatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 5.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 5.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 5.1.5. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 5.1.6. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 5.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations DELETE : delete collection of ValidatingWebhookConfiguration GET : list or watch objects of kind ValidatingWebhookConfiguration POST : create a ValidatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations GET : watch individual changes to a list of ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/{name} DELETE : delete a ValidatingWebhookConfiguration GET : read the specified ValidatingWebhookConfiguration PATCH : partially update the specified ValidatingWebhookConfiguration PUT : replace the specified ValidatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations/{name} GET : watch changes to an object of kind ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ValidatingWebhookConfiguration Table 5.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.3. Body parameters Parameter Type Description body DeleteOptions schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ValidatingWebhookConfiguration Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a ValidatingWebhookConfiguration Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body ValidatingWebhookConfiguration schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 202 - Accepted ValidatingWebhookConfiguration schema 401 - Unauthorized Empty 5.2.2. /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations Table 5.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the ValidatingWebhookConfiguration Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ValidatingWebhookConfiguration Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ValidatingWebhookConfiguration Table 5.17. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ValidatingWebhookConfiguration Table 5.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.19. Body parameters Parameter Type Description body Patch schema Table 5.20. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ValidatingWebhookConfiguration Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. Body parameters Parameter Type Description body ValidatingWebhookConfiguration schema Table 5.23. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 401 - Unauthorized Empty 5.2.4. /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations/{name} Table 5.24. Global path parameters Parameter Type Description name string name of the ValidatingWebhookConfiguration Table 5.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/extension_apis/validatingwebhookconfiguration-admissionregistration-k8s-io-v1
Chapter 2. Installing RHEL image builder
Chapter 2. Installing RHEL image builder Before using RHEL image builder, you must install it. 2.1. RHEL image builder system requirements The host that runs RHEL image builder must meet the following requirements: Table 2.1. RHEL image builder system requirements Parameter Minimal Required Value System type A dedicated host or virtual machine. Note that RHEL image builder is not supported in containers, including Red Hat Universal Base Images (UBI). Processor 2 cores Memory 4 GiB Disk space 20 GiB of free space in the ` /var/cache/` filesystem Access privileges root Network Internet connectivity to the Red Hat Content Delivery Network (CDN). Note If you do not have internet connectivity, use RHEL image builder in isolated networks. For that, you must override the default repositories to point to your local repositories to not connect to Red Hat Content Delivery Network (CDN). Ensure that you have your content mirrored internally or use Red Hat Satellite. Additional resources Configuring RHEL image builder repositories Provisioning to Satellite using a Red Hat image builder image 2.2. Installing RHEL image builder Install RHEL image builder to have access to all the osbuild-composer package functionalities. Prerequisites You are logged in to the RHEL host on which you want to install RHEL image builder. The host is subscribed to Red Hat Subscription Manager (RHSM) or Red Hat Satellite. You have enabled the BaseOS and AppStream repositories to be able to install the RHEL image builder packages. Procedure Install RHEL image builder and other necessary packages: osbuild-composer - A service to build customized RHEL operating system images. composer-cli - This package enables access to the CLI interface. cockpit-composer - This package enables access to the Web UI interface. The web console is installed as a dependency of the cockpit-composer package. Enable and start RHEL image builder socket: If you want to use RHEL image builder in the web console, enable and start it. The osbuild-composer and cockpit services start automatically on first access. Load the shell configuration script so that the autocomplete feature for the composer-cli command starts working immediately without logging out and in: Important The osbuild-composer package is the new backend engine that will be the preferred default and focus of all new functionality beginning with Red Hat Enterprise Linux 8.3 and later. The backend lorax-composer package is considered deprecated, will only receive select fixes for the remainder of the Red Hat Enterprise Linux 8 life cycle and will be omitted from future major releases. It is recommended to uninstall lorax-composer in favor of osbuild-composer. Verification Verify that the installation works by running composer-cli : Troubleshooting You can use a system journal to track RHEL image builder activities. Additionally, you can find the log messages in the file. To find the journal output for traceback, run the following commands: To show the local worker, such as the [email protected] , a template service that can start multiple service instances: To show the running services: 2.3. Reverting to lorax-composer RHEL image builder backend The osbuild-composer backend, though much more extensible, does not currently achieve feature parity with the lorax-composer backend. To revert to the backend, follow the steps: Prerequisites You have installed the osbuild-composer package Procedure Remove the osbuild-composer backend. In the /etc/yum.conf file , add an exclude entry for osbuild-composer package. Install the lorax-composer package. Enable and start the lorax-composer service to start after each reboot. Additional resources Create a Case at Red Hat Support
[ "yum install osbuild-composer composer-cli cockpit-composer", "systemctl enable --now osbuild-composer.socket", "systemctl enable --now cockpit.socket", "source /etc/bash_completion.d/composer-cli", "composer-cli status show", "journalctl | grep osbuild", "journalctl -u osbuild-worker*", "journalctl -u osbuild-composer.service", "yum remove osbuild-composer yum remove weldr-client", "cat /etc/yum.conf [main] gpgcheck=1 installonly_limit=3 clean_requirements_on_remove=True best=True skip_if_unavailable=False exclude=osbuild-composer weldr-client", "yum install lorax-composer composer-cli", "systemctl enable --now lorax-composer.socket systemctl start lorax-composer" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/installing-composer_composing-a-customized-rhel-system-image
Chapter 112. Identity Management security settings
Chapter 112. Identity Management security settings Learn more about security-related features of Identity Management. 112.1. How Identity Management applies default security settings By default, Identity Management (IdM) on RHEL 8 uses the system-wide crypto policy. The benefit of this policy is that you do not need to harden individual IdM components manually. Important Red Hat recommends that you use the system-wide crypto policy. Changing individual security settings can break components of IdM. For example, Java in RHEL 8 does not fully support the TLS 1.3 protocol. Therefore, using this protocol can cause failures in IdM. Additional resources See the crypto-policies(7) man page on your system 112.2. Anonymous LDAP binds in Identity Management By default, anonymous binds to the Identity Management (IdM) LDAP server are enabled. Anonymous binds can expose certain configuration settings or directory values. However, some utilities, such as realmd , or older RHEL clients require anonymous binds enabled to discover domain settings when enrolling a client. Additional resources Disabling anonymous binds 112.3. Disabling anonymous binds You can disable anonymous binds on the Identity Management (IdM) 389 Directory Server instance by using LDAP tools to reset the nsslapd-allow-anonymous-access attribute. These are the valid values for the nsslapd-allow-anonymous-access attribute: on : allows all anonymous binds (default) rootdse : allows anonymous binds only for root DSE information off : disallows any anonymous binds Red Hat does not recommend completely disallowing anonymous binds by setting the attribute to off , because this also blocks external clients from checking the server configuration. LDAP and web clients are not necessarily domain clients, so they connect anonymously to read the root DSE file to get connection information. By changing the value of the nsslapd-allow-anonymous-access attribute to rootdse , you allow access to the root DSE and server configuration without any access to the directory data. Warning Certain clients rely on anonymous binds to discover IdM settings. Additionally, the compat tree can break for legacy clients that are not using authentication. Perform this procedure only if your clients do not require anonymous binds. Prerequisites You can authenticate as the Directory Manager to write to the LDAP server. You can authenticate as the root user to restart IdM services. Procedure Change the nsslapd-allow-anonymous-access attribute to rootdse . Restart the 389 Directory Server instance to load the new setting. Verification Display the value of the nsslapd-allow-anonymous-access attribute. Additional resources nsslapd-allow-anonymous-access in Directory Server 11 documentation Anonymous LDAP binds in Identity Management
[ "ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse modifying entry \"cn=config\"", "systemctl restart dirsrv.target", "ldapsearch -x -D \"cn=Directory Manager\" -b cn=config -W -h server.example.com -p 389 nsslapd-allow-anonymous-access | grep nsslapd-allow-anonymous-access Enter LDAP Password: requesting: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/identity-management-security-settings_configuring-and-managing-idm
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/security_hardening_guide_for_sap_hana/conscious-language-message_security-hardening
Chapter 4. Hardening Your System with Tools and Services
Chapter 4. Hardening Your System with Tools and Services 4.1. Desktop Security Red Hat Enterprise Linux 7 offers several ways for hardening the desktop against attacks and preventing unauthorized accesses. This section describes recommended practices for user passwords, session and account locking, and safe handling of removable media. 4.1.1. Password Security Passwords are the primary method that Red Hat Enterprise Linux 7 uses to verify a user's identity. This is why password security is so important for protection of the user, the workstation, and the network. For security purposes, the installation program configures the system to use Secure Hash Algorithm 512 ( SHA512 ) and shadow passwords. It is highly recommended that you do not alter these settings. If shadow passwords are deselected during installation, all passwords are stored as a one-way hash in the world-readable /etc/passwd file, which makes the system vulnerable to offline password cracking attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd file to his own machine and run any number of password cracking programs against it. If there is an insecure password in the file, it is only a matter of time before the password cracker discovers it. Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow , which is readable only by the root user. This forces a potential attacker to attempt password cracking remotely by logging into a network service on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an obvious trail as hundreds of failed login attempts are written to system files. Of course, if the cracker starts an attack in the middle of the night on a system with weak passwords, the cracker may have gained access before dawn and edited the log files to cover his tracks. In addition to format and storage considerations is the issue of content. The single most important thing a user can do to protect his account against a password cracking attack is create a strong password. Note Red Hat recommends using a central authentication solution, such as Red Hat Identity Management (IdM). Using a central solution is preferred over using local passwords. For details, see: Introduction to Red Hat Identity Management Defining Password Policies 4.1.1.1. Creating Strong Passwords When creating a secure password, the user must remember that long passwords are stronger than short and complex ones. It is not a good idea to create a password of just eight characters, even if it contains digits, special characters and uppercase letters. Password cracking tools, such as John The Ripper, are optimized for breaking such passwords, which are also hard to remember by a person. In information theory, entropy is the level of uncertainty associated with a random variable and is presented in bits. The higher the entropy value, the more secure the password is. According to NIST SP 800-63-1, passwords that are not present in a dictionary comprised of 50000 commonly selected passwords should have at least 10 bits of entropy. As such, a password that consists of four random words contains around 40 bits of entropy. A long password consisting of multiple words for added security is also called a passphrase , for example: If the system enforces the use of uppercase letters, digits, or special characters, the passphrase that follows the above recommendation can be modified in a simple way, for example by changing the first character to uppercase and appending " 1! ". Note that such a modification does not increase the security of the passphrase significantly. Another way to create a password yourself is using a password generator. The pwmake is a command-line tool for generating random passwords that consist of all four groups of characters - uppercase, lowercase, digits and special characters. The utility allows you to specify the number of entropy bits that are used to generate the password. The entropy is pulled from /dev/urandom . The minimum number of bits you can specify is 56, which is enough for passwords on systems and services where brute force attacks are rare. 64 bits is adequate for applications where the attacker does not have direct access to the password hash file. For situations when the attacker might obtain the direct access to the password hash or the password is used as an encryption key, 80 to 128 bits should be used. If you specify an invalid number of entropy bits, pwmake will use the default of bits. To create a password of 128 bits, enter the following command: While there are different approaches to creating a secure password, always avoid the following bad practices: Using a single dictionary word, a word in a foreign language, an inverted word, or only numbers. Using less than 10 characters for a password or passphrase. Using a sequence of keys from the keyboard layout. Writing down your passwords. Using personal information in a password, such as birth dates, anniversaries, family member names, or pet names. Using the same passphrase or password on multiple machines. While creating secure passwords is imperative, managing them properly is also important, especially for system administrators within larger organizations. The following section details good practices for creating and managing user passwords within an organization. 4.1.1.2. Forcing Strong Passwords If an organization has a large number of users, the system administrators have two basic options available to force the use of strong passwords. They can create passwords for the user, or they can let users create their own passwords while verifying the passwords are of adequate strength. Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down, thus exposing them. For these reasons, most system administrators prefer to have the users create their own passwords, but actively verify that these passwords are strong enough. In some cases, administrators may force users to change their passwords periodically through password aging. When users are asked to create or change passwords, they can use the passwd command-line utility, which is PAM -aware ( Pluggable Authentication Modules ) and checks to see if the password is too short or otherwise easy to crack. This checking is performed by the pam_pwquality.so PAM module. Note In Red Hat Enterprise Linux 7, the pam_pwquality PAM module replaced pam_cracklib , which was used in Red Hat Enterprise Linux 6 as a default module for password quality checking. It uses the same back end as pam_cracklib . The pam_pwquality module is used to check a password's strength against a set of rules. Its procedure consists of two steps: first it checks if the provided password is found in a dictionary. If not, it continues with a number of additional checks. pam_pwquality is stacked alongside other PAM modules in the password component of the /etc/pam.d/passwd file, and the custom set of rules is specified in the /etc/security/pwquality.conf configuration file. For a complete list of these checks, see the pwquality.conf (8) manual page. Example 4.1. Configuring password strength-checking in pwquality.conf To enable using pam_quality , add the following line to the password stack in the /etc/pam.d/passwd file: Options for the checks are specified one per line. For example, to require a password with a minimum length of 8 characters, including all four classes of characters, add the following lines to the /etc/security/pwquality.conf file: To set a password strength-check for character sequences and same consecutive characters, add the following lines to /etc/security/pwquality.conf : In this example, the password entered cannot contain more than 3 characters in a monotonic sequence, such as abcd , and more than 3 identical consecutive characters, such as 1111 . Note As the root user is the one who enforces the rules for password creation, they can set any password for themselves or for a regular user, despite the warning messages. 4.1.1.3. Configuring Password Aging Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a specified period (usually 90 days), the user is prompted to create a new password. The theory behind this is that if a user is forced to change his password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down. To specify password aging under Red Hat Enterprise Linux 7, make use of the chage command. Important In Red Hat Enterprise Linux 7, shadow passwords are enabled by default. For more information, see the Red Hat Enterprise Linux 7 System Administrator's Guide . The -M option of the chage command specifies the maximum number of days the password is valid. For example, to set a user's password to expire in 90 days, use the following command: chage -M 90 username In the above command, replace username with the name of the user. To disable password expiration, use the value of -1 after the -M option. For more information on the options available with the chage command, see the table below. Table 4.1. chage command line options Option Description -d days Specifies the number of days since January 1, 1970 the password was changed. -E date Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. -I days Specifies the number of inactive days after the password expiration before locking the account. If the value is 0 , the account is not locked after the password expires. -l Lists current account aging settings. -m days Specify the minimum number of days after which the user must change passwords. If the value is 0 , the password does not expire. -M days Specify the maximum number of days for which the password is valid. When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. -W days Specifies the number of days before the password expiration date to warn the user. You can also use the chage command in interactive mode to modify multiple password aging and account details. Use the following command to enter interactive mode: chage <username> The following is a sample interactive session using this command: You can configure a password to expire the first time a user logs in. This forces users to change passwords immediately. Set up an initial password. To assign a default password, enter the following command at a shell prompt as root : passwd username Warning The passwd utility has the option to set a null password. Using a null password, while convenient, is a highly insecure practice, as any third party can log in and access the system using the insecure user name. Avoid using null passwords wherever possible. If it is not possible, always make sure that the user is ready to log in before unlocking an account with a null password. Force immediate password expiration by running the following command as root : chage -d 0 username This command sets the value for the date the password was last changed to the epoch (January 1, 1970). This value forces immediate password expiration no matter what password aging policy, if any, is in place. Upon the initial log in, the user is now prompted for a new password. 4.1.2. Account Locking In Red Hat Enterprise Linux 7, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts. Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute force attacks targeted to obtain a user's account password. With the pam_faillock module, failed login attempts are stored in a separate file for each user in the /var/run/faillock directory. Note The order of lines in the failed attempt log files is important. Any change in this order can lock all user accounts, including the root user account when the even_deny_root option is used. Follow these steps to configure account locking: To lock out any non-root user after three unsuccessful attempts and unlock that user after 10 minutes, add two lines to the auth section of the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. After your edits, the entire auth section in both files should look like this: Lines number 2 and 4 have been added. Add the following line to the account section of both files specified in the step: To apply account locking for the root user as well, add the even_deny_root option to the pam_faillock entries in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files: When the user john attempts to log in for the fourth time after failing to log in three times previously, his account is locked upon the fourth attempt: To prevent the system from locking users out even after multiple failed logins, add the following line just above the line where pam_faillock is called for the first time in both /etc/pam.d/system-auth and /etc/pam.d/password-auth . Also replace user1 , user2 , and user3 with the actual user names. To view the number of failed attempts per user, run, as root , the following command: To unlock a user's account, run, as root , the following command: Important Running cron jobs resets the failure counter of pam_faillock of that user that is running the cron job, and thus pam_faillock should not be configured for cron . See the Knowledge Centered Support (KCS) solution for more information. Keeping Custom Settings with authconfig When modifying authentication configuration using the authconfig utility, the system-auth and password-auth files are overwritten with the settings from the authconfig utility. This can be avoided by creating symbolic links in place of the configuration files, which authconfig recognizes and does not overwrite. In order to use custom settings in the configuration files and authconfig simultaneously, configure account locking using the following steps: Check whether the system-auth and password-auth files are already symbolic links pointing to system-auth-ac and password-auth-ac (this is the system default): If the output is similar to the following, the symbolic links are in place, and you can skip to step number 3: If the system-auth and password-auth files are not symbolic links, continue with the step. Rename the configuration files: Create configuration files with your custom settings: The /etc/pam.d/system-auth-local file should contain the following lines: The /etc/pam.d/password-auth-local file should contain the following lines: Create the following symbolic links: For more information on various pam_faillock configuration options, see the pam_faillock (8) manual page. Removing the nullok option The nullok option, which allows users to log in with a blank password if the password field in the /etc/shadow file is empty, is enabled by default. To disable the nullok option, remove the nullok string from configuration files in the /etc/pam.d/ directory, such as /etc/pam.d/system-auth or /etc/pam.d/password-auth . See the Will nullok option allow users to login without entering a password? KCS solution for more information. 4.1.3. Session Locking Users may need to leave their workstation unattended for a number of reasons during everyday operation. This could present an opportunity for an attacker to physically access the machine, especially in environments with insufficient physical security measures (see Section 1.2.1, "Physical Controls" ). Laptops are especially exposed since their mobility interferes with physical security. You can alleviate these risks by using session locking features which prevent access to the system until a correct password is entered. Note The main advantage of locking the screen instead of logging out is that a lock allows the user's processes (such as file transfers) to continue running. Logging out would stop these processes. 4.1.3.1. Locking Virtual Consoles Using vlock To lock a virtual console, use the vlock utility. Install it by entering the following command as root: After installation, you can lock any console session by using the vlock command without any additional parameters. This locks the currently active virtual console session while still allowing access to the others. To prevent access to all virtual consoles on the workstation, execute the following: In this case, vlock locks the currently active console and the -a option prevents switching to other virtual consoles. See the vlock(1) man page for additional information. 4.1.4. Enforcing Read-Only Mounting of Removable Media To enforce read-only mounting of removable media (such as USB flash disks), the administrator can use a udev rule to detect removable media and configure them to be mounted read-only using the blockdev utility. This is sufficient for enforcing read-only mounting of physical media. Using blockdev to Force Read-Only Mounting of Removable Media To force all removable media to be mounted read-only, create a new udev configuration file named, for example, 80-readonly-removables.rules in the /etc/udev/rules.d/ directory with the following content: SUBSYSTEM=="block",ATTRS{removable}=="1",RUN{program}="/sbin/blockdev --setro %N" The above udev rule ensures that any newly connected removable block (storage) device is automatically configured as read-only using the blockdev utility. Applying New udev Settings For these settings to take effect, the new udev rules need to be applied. The udev service automatically detects changes to its configuration files, but new settings are not applied to already existing devices. Only newly connected devices are affected by the new settings. Therefore, you need to unmount and unplug all connected removable media to ensure that the new settings are applied to them when they are plugged in. To force udev to re-apply all rules to already existing devices, enter the following command as root : Note that forcing udev to re-apply all rules using the above command does not affect any storage devices that are already mounted. To force udev to reload all rules (in case the new rules are not automatically detected for some reason), use the following command:
[ "randomword1 randomword2 randomword3 randomword4", "pwmake 128", "password required pam_pwquality.so retry=3", "minlen = 8 minclass = 4", "maxsequence = 3 maxrepeat = 3", "~]# chage juan Changing the aging information for juan Enter the new value, or press ENTER for the default Minimum Password Age [0]: 10 Maximum Password Age [99999]: 90 Last Password Change (YYYY-MM-DD) [2006-08-18]: Password Expiration Warning [7]: Password Inactive [-1]: Account Expiration Date (YYYY-MM-DD) [1969-12-31]:", "1 auth required pam_env.so 2 auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 3 auth sufficient pam_unix.so nullok try_first_pass 4 auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600 5 auth requisite pam_succeed_if.so uid >= 1000 quiet_success 6 auth required pam_deny.so", "account required pam_faillock.so", "auth required pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600 auth sufficient pam_unix.so nullok try_first_pass auth [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=600 account required pam_faillock.so", "~]USD su - john Account locked due to 3 failed logins su: incorrect password", "auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3", "~]USD faillock john: When Type Source Valid 2013-03-05 11:44:14 TTY pts/0 V", "faillock --user <username> --reset", "~]# ls -l /etc/pam.d/{password,system}-auth", "lrwxrwxrwx. 1 root root 16 24. Feb 09.29 /etc/pam.d/password-auth -> password-auth-ac lrwxrwxrwx. 1 root root 28 24. Feb 09.29 /etc/pam.d/system-auth -> system-auth-ac", "~]# mv /etc/pam.d/system-auth /etc/pam.d/system-auth-ac ~]# mv /etc/pam.d/password-auth /etc/pam.d/password-auth-ac", "~]# vi /etc/pam.d/system-auth-local", "auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include system-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include system-auth-ac password include system-auth-ac session include system-auth-ac", "~]# vi /etc/pam.d/password-auth-local", "auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include password-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include password-auth-ac password include password-auth-ac session include password-auth-ac", "~]# ln -sf /etc/pam.d/system-auth-local /etc/pam.d/system-auth ~]# ln -sf /etc/pam.d/password-auth-local /etc/pam.d/password-auth", "~]# yum install kbd", "vlock -a", "SUBSYSTEM==\"block\",ATTRS{removable}==\"1\",RUN{program}=\"/sbin/blockdev --setro %N\"", "~# udevadm trigger", "~# udevadm control --reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-hardening_your_system_with_tools_and_services
Release Notes for Spring Boot 2.7
Release Notes for Spring Boot 2.7 Red Hat support for Spring Boot 2.7 For use with Spring Boot 2.7.18 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/release_notes_for_spring_boot_2.7/index
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Please include the Document URL , the section number and describe the issue . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/proc_providing-feedback-on-red-hat-documentation_default
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of Red Hat build of Apache Qpid ProtonJ2 through example programs. For more examples, see the Apache Qpid ProtonJ2 examples . 4.1. Sending messages This client program connects to a server using <serverHost> and <serverPort> , creates a sender for target <address> , sends 100 messages containing String and exits. Example: Sending messages package org.apache.qpid.protonj2.client.examples; import org.apache.qpid.protonj2.client.Client; import org.apache.qpid.protonj2.client.Connection; import org.apache.qpid.protonj2.client.ConnectionOptions; import org.apache.qpid.protonj2.client.Message; import org.apache.qpid.protonj2.client.Sender; import org.apache.qpid.protonj2.client.Tracker; 1 public class Send { private static final int MESSAGE_COUNT = 100; 2 public static void main(String[] argv) throws Exception { final String serverHost = System.getProperty("HOST", "localhost"); 3 final int serverPort = Integer.getInteger("PORT", 5672); 4 final String address = System.getProperty("ADDRESS", "send-receive-example"); final Client client = Client.create(); 5 final ConnectionOptions options = new ConnectionOptions(); 6 options.user(System.getProperty("USER")); options.password(System.getProperty("PASSWORD")); try (Connection connection = client.connect(serverHost, serverPort, options); 7 Sender sender = connection.openSender(address)) { 8 for (int i = 0; i < MESSAGE_COUNT; ++i) { Message<String> message = Message.create(String.format("Hello World! [%s]", i)); 9 Tracker tracker = sender.send(message); 10 tracker.awaitSettlement(); 11 System.out.println(String.format("Sent message to %s: %s", sender.address(), message.body())); } } } } <.> packages required by Red Hat build of Apache Qpid ProtonJ2. <.> The number of messages to send. <.> serverHost is the network address of the host or virtual host for the AMQP connection and can be configured by setting the Environment variable HOST . <.> serverPort is the port on the host that the broker is accepting connections and can be configured by setting the environment variable PORT . <.> Client is the container that can create multiple Connections to a broker. <.> options is used for various setting, including User and Password . See Section 5.1, "Connection Options" for more information. <.> connection is the AMQP Connection to a broker. <.> Create a sender for transferring messages to the broker. <.> In the message send loop a new message is created. <.> The message is sent to the broker. <.> Wait for the broker to settle the message. Running the example To run the example program, see Chapter 3, Getting started . 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. Example: Receiving messages package org.apache.qpid.protonj2.client.examples; import org.apache.qpid.protonj2.client.Client; import org.apache.qpid.protonj2.client.Connection; import org.apache.qpid.protonj2.client.ConnectionOptions; import org.apache.qpid.protonj2.client.Delivery; import org.apache.qpid.protonj2.client.Message; import org.apache.qpid.protonj2.client.Receiver; 1 public class Receive { private static final int MESSAGE_COUNT = 100; 2 public static void main(String[] args) throws Exception { final String serverHost = System.getProperty("HOST", "localhost"); 3 final int serverPort = Integer.getInteger("PORT", 5672); 4 final String address = System.getProperty("ADDRESS", "send-receive-example"); 5 final Client client = Client.create(); 6 final ConnectionOptions options = new ConnectionOptions(); 7 options.user(System.getProperty("USER")); options.password(System.getProperty("PASSWORD")); try (Connection connection = client.connect(serverHost, serverPort, options); 8 Receiver receiver = connection.openReceiver(address)) { 9 for (int i = 0; i < MESSAGE_COUNT; ++i) { Delivery delivery = receiver.receive(); 10 Message<String> message = delivery.message(); 11 System.out.println("Received message with body: " + message.body()); } } } } 1 1 required packages for Red Hat build of Apache Qpid ProtonJ2. 2 2 The number of messages to receive. 3 3 serverHost is the network address of the host or virtual host for the AMQP connection and can be configured by setting the Environment variable HOST . 4 4 serverPort is the port on the host that the broker is accepting connections and can be configured by setting the environment variable PORT . 5 5 6 Client is the container that can create multiple Connections to a broker. 6 7 options is used for various setting, including User and Password . See Section 5.1, "Connection Options" for more information. 7 8 connection is the AMQP Connection to a broker. 8 9 Create a receiver for receiving messages from the broker. 9 10 In the message receive loop a new delivery is received. 10 11 The message is obtained from the delivery . Running the example To run the example program, see Chapter 3, Getting started .
[ "package org.apache.qpid.protonj2.client.examples; import org.apache.qpid.protonj2.client.Client; import org.apache.qpid.protonj2.client.Connection; import org.apache.qpid.protonj2.client.ConnectionOptions; import org.apache.qpid.protonj2.client.Message; import org.apache.qpid.protonj2.client.Sender; import org.apache.qpid.protonj2.client.Tracker; 1 public class Send { private static final int MESSAGE_COUNT = 100; 2 public static void main(String[] argv) throws Exception { final String serverHost = System.getProperty(\"HOST\", \"localhost\"); 3 final int serverPort = Integer.getInteger(\"PORT\", 5672); 4 final String address = System.getProperty(\"ADDRESS\", \"send-receive-example\"); final Client client = Client.create(); 5 final ConnectionOptions options = new ConnectionOptions(); 6 options.user(System.getProperty(\"USER\")); options.password(System.getProperty(\"PASSWORD\")); try (Connection connection = client.connect(serverHost, serverPort, options); 7 Sender sender = connection.openSender(address)) { 8 for (int i = 0; i < MESSAGE_COUNT; ++i) { Message<String> message = Message.create(String.format(\"Hello World! [%s]\", i)); 9 Tracker tracker = sender.send(message); 10 tracker.awaitSettlement(); 11 System.out.println(String.format(\"Sent message to %s: %s\", sender.address(), message.body())); } } } }", "package org.apache.qpid.protonj2.client.examples; import org.apache.qpid.protonj2.client.Client; import org.apache.qpid.protonj2.client.Connection; import org.apache.qpid.protonj2.client.ConnectionOptions; import org.apache.qpid.protonj2.client.Delivery; import org.apache.qpid.protonj2.client.Message; import org.apache.qpid.protonj2.client.Receiver; 1 public class Receive { private static final int MESSAGE_COUNT = 100; 2 public static void main(String[] args) throws Exception { final String serverHost = System.getProperty(\"HOST\", \"localhost\"); 3 final int serverPort = Integer.getInteger(\"PORT\", 5672); 4 final String address = System.getProperty(\"ADDRESS\", \"send-receive-example\"); 5 final Client client = Client.create(); 6 final ConnectionOptions options = new ConnectionOptions(); 7 options.user(System.getProperty(\"USER\")); options.password(System.getProperty(\"PASSWORD\")); try (Connection connection = client.connect(serverHost, serverPort, options); 8 Receiver receiver = connection.openReceiver(address)) { 9 for (int i = 0; i < MESSAGE_COUNT; ++i) { Delivery delivery = receiver.receive(); 10 Message<String> message = delivery.message(); 11 System.out.println(\"Received message with body: \" + message.body()); } } } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/examples
Chapter 10. LDAP Authentication Setup for Red Hat Quay
Chapter 10. LDAP Authentication Setup for Red Hat Quay Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Red Hat Quay supports using LDAP as an identity provider. 10.1. Considerations when enabling LDAP Prior to enabling LDAP for your Red Hat Quay deployment, you should consider the following. Existing Red Hat Quay deployments Conflicts between usernames can arise when you enable LDAP for an existing Red Hat Quay deployment that already has users configured. For example, one user, alice , was manually created in Red Hat Quay prior to enabling LDAP. If the username alice also exists in the LDAP directory, Red Hat Quay automatically creates a new user, alice-1 , when alice logs in for the first time using LDAP. Red Hat Quay then automatically maps the LDAP credentials to the alice account. For consistency reasons, this might be erroneous for your Red Hat Quay deployment. It is recommended that you remove any potentially conflicting local account names from Red Hat Quay prior to enabling LDAP. Manual User Creation and LDAP authentication When Red Hat Quay is configured for LDAP, LDAP-authenticated users are automatically created in Red Hat Quay's database on first log in, if the configuration option FEATURE_USER_CREATION is set to true . If this option is set to false , the automatic user creation for LDAP users fails, and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first. Conversely, if FEATURE_USER_CREATION is set to true , this also means that a user can still create an account from the Red Hat Quay login screen, even if there is an equivalent user in LDAP. 10.2. Configuring LDAP for Red Hat Quay Use the following procedure to configure LDAP for your Red Hat Quay deployment. Procedure You can use the Red Hat Quay config tool to configure LDAP. Using the Red Hat Quay config tool, locate the Authentication section. Select LDAP from the dropdown menu, and update the LDAP configuration fields as required. Optional. On the Team synchronization box, and click Enable Team Syncrhonization Support . With team synchronization enabled, Red Hat Quay administrators who are also superusers can set teams to have their membership synchronized with a backing group in LDAP. For Resynchronization duration enter 60m . This option sets the resynchronization duration at which a team must be re-synchronized. This field must be set similar to the following examples: 30m , 1h , 1d . Optional. For Self-service team syncing setup , you can click Allow non-superusers to enable and manage team syncing to allow superusers the ability to enable and manage team syncing under the organizations that they are administrators for. Locate the LDAP URI box and provide a full LDAP URI, including the ldap:// or ldaps:// prefix, for example, ldap://117.17.8.101 . Under Base DN , provide a name which forms the base path for looking up all LDAP records, for example, o=<organization_id> , dc=<example_domain_component> , dc=com . Under User Relative DN , provide a list of Distinguished Name path(s), which form the secondary base path(s) for looking up all user LDAP records relative to the Base DN defined above. For example, uid=<name> , ou=Users , o=<organization_id> , dc=<example_domain_component> , dc=com . This path, or these paths, is tried if the user is not found through the primary relative DN. Note User Relative DN is relative to Base DN , for example, ou=Users and not ou=Users,dc=<example_domain_component>,dc=com . Optional. Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. You can type in the Organizational Units and click Add to add multiple RDNs. For example, ou=Users,ou=NYC and ou=Users,ou=SFO . The User Relative DN searches with subtree scope. For example, if your organization has Organization Units NYC and SFO under the Users OU (that is, ou=SFO,ou=Users and ou=NYC,ou=Users ), Red Hat Quay can authenticate users from both the NYC and SFO Organizational Units if the User Relative DN is set to Users ( ou=Users ). Optional. Fill in the Additional User Filter Expression field for all user lookup queries if desired. Distinguished Names used in the filter must be full based. The Base DN is not added automatically added to this field, and you must wrap the text in parentheses, for example, (memberOf=cn=developers,ou=groups,dc=<example_domain_component>,dc=com) . Fill in the Administrator DN field for the Red Hat Quay administrator account. This account must be able to login and view the records for all users accounts. For example: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com . Fill in the Administrator DN Password field. This is the password for the administrator distinguished name. Important The password for this field is stored in plaintext inside of the config.yaml file. Setting up a dedicated account of using a password hash is highly recommended. Optional. Fill in the UID Attribute field. This is the name of the property field in the LDAP user records that stores your user's username. Most commonly, uid is entered for this field. This field can be used to log into your Red Hat Quay deployment. Optional. Fill in the Mail Attribute field. This is the name of the property field in your LDAP user records that stores your user's e-mail addresses. Most commonly, mail is entered for this field. This field can be used to log into your Red Hat Quay deployment. Note The username to log in must exist in the User Relative DN . If you are using Microsoft Active Directory to setup your LDAP deployment, you must use sAMAccountName for your UID attribute. Optional. You can add a custom SSL/TLS certificate by clicking Choose File under the Custom TLS Certificate optionl. Additionally, you can enable fallbacks to insecure, non-TLS connections by checking the Allow fallback to non-TLS connections box. If you upload an SSl/TLS certificate, you must provide an ldaps:// prefix, for example, LDAP_URI: ldaps://ldap_provider.example.org . Alternatively, you can update your config.yaml file directly to include all relevant information. For example: --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com After you have added all required LDAP fields, click the Save Configuration Changes button to validate the configuration. All validation must succeed before proceeding. Additional configuration can be performed by selecting the Continue Editing button. 10.3. Enabling the LDAP_RESTRICTED_USER_FILTER configuration field The LDAP_RESTRICTED_USER_FILTER configuration field is a subset of the LDAP_USER_FILTER configuration field. When configured, this option allows Red Hat Quay administrators the ability to configure LDAP users as restricted users when Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP restricted users on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_RESTRICTED_USER_FILTER parameter and specify the group of restricted users, for example, members : --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_RESTRICTED_USER_FILTER feature, your LDAP Red Hat Quay users are restricted from reading and writing content, and creating organizations. 10.4. Enabling the LDAP_SUPERUSER_FILTER configuration field With the LDAP_SUPERUSER_FILTER field configured, Red Hat Quay administrators can configure Lightweight Directory Access Protocol (LDAP) users as superusers if Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP superusers on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_SUPERUSER_FILTER parameter and add the group of users you want configured as super users, for example, root : --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_SUPERUSER_FILTER feature, your LDAP Red Hat Quay users have superuser privileges. The following options are available to superusers: Manage users Manage organizations Manage service keys View the change log Query the usage logs Create globally visible user messages 10.5. Common LDAP configuration issues The following errors might be returned with an invalid configuration. Invalid credentials . If you receive this error, the Administrator DN or Administrator DN password values are incorrect. Ensure that you are providing accurate Administrator DN and password values. *Verification of superuser %USERNAME% failed . This error is returned for the following reasons: The username has not been found. The user does not exist in the remote authentication system. LDAP authorization is configured improperly. Cannot find the current logged in user . When configuring LDAP for Red Hat Quay, there may be situations where the LDAP connection is established successfully using the username and password provided in the Administrator DN fields. However, if the current logged-in user cannot be found within the specified User Relative DN path using the UID Attribute or Mail Attribute fields, there are typically two potential reasons for this: The current logged in user does not exist in the User Relative DN path. The Administrator DN does not have rights to search or read the specified LDAP path. To fix this issue, ensure that the logged in user is included in the User Relative DN path, or provide the correct permissions to the Administrator DN account. 10.6. LDAP configuration fields For a full list of LDAP configuration fields, see LDAP configuration fields
[ "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/ldap-authentication-setup-for-quay-enterprise
Chapter 7. Setting up Spring Boot applications for HawtIO Online with Jolokia
Chapter 7. Setting up Spring Boot applications for HawtIO Online with Jolokia Note If stopping a Camel route is changing the health status to DOWN and triggering a pod restart by OpenShift, a possible solution to avoid this behavior is to set: camel.routecontroller.enabled = true It will enable the supervised route controller so that the route will be with status Stopped and the overall status of the health check is UP . This section describes the enabling of monitoring of a Spring Boot application by HawtIO. It starts from first principles in setting up a simple example application. Note This application runs on OpenShift and is discovered and monitored by HawtIO online. If you already have a Spring Boot application implemented, skip to Section 7.2, "Adding Jolokia Starter dependency to the application" . Note The following is based on the jolokia sample application in the Apache Camel Spring-Boot examples repository. Prerequisites Maven has been installed and mvn is available on the Command-line (CLI). 7.1. Setting up a sample Spring Boot application To create a new Spring Boot application, you can either create the maven project directory structure manually, or execute an archetype to generate the scaffolding for a standard java project, which you can customize for individual applications. Customize these values as needed: archetypeVersion 4.8.0.redhat-00022 groupId io.hawtio.online.examples artifactId hawtio-online-example-camel-springboot-os version 1.0.0 Run the maven archetype: mvn archetype:generate \ -DarchetypeGroupId=org.apache.camel.archetypes \ -DarchetypeArtifactId=camel-archetype-spring-boot \ -DarchetypeVersion=4.8.0.redhat-00022 \ -DgroupId=io.hawt.online.examples \ -DartifactId=hawtio-online-example \ -Dversion=1.0.0 \ -DinteractiveMode=false \ -Dpackage=io.hawtio Change into the new project named artifactId (in the above example: hawtio-online-example ) An example hello world application is created, and you can compile it. At this point, the application should be executable locally. Use the mvn spring-boot:run maven goal to test the application: USD mvn spring-boot:run 7.2. Adding Jolokia Starter dependency to the application In order to allow HawtIO to monitor the Camel route in the application, you must add the camel-jolokia-starter dependency. It contains all the necessary transitive dependencies. Add the needed dependencies to the <dependencies> section: <dependencies> ... <!-- Camel --> ... <!-- Dependency is mandatory for exposing Jolokia endpoint --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jolokia-starter</artifactId> </dependency> <!-- Optional: enables debugging support for Camel --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-debug</artifactId> <version>4.8.0</version> </dependency> ... </dependencies> For configuration details, see the Jolokia component documentation To enable inflight monitoring also add the following property to the application.properties file according to the Spring Boot documentation : camel.springboot.inflight-repository-browse-enabled=true 7.3. Configuring the application for Deployment to OpenShift The starter already manages the configuration for the Kubernetes/OpenShift environment, so no specific extra configuration is needed. The only mandatory configuration is the name of the port exposed by the POD, it must be named jolokia . spec: containers: - name: my-container ports: - name: jolokia containerPort: 8778 protocol: TCP ........ ....... 7.4. Deploying the Spring Boot application to OpenShift Prerequisites The appropriate project selected (see Documentation ); All files have been configured. Run the following maven command: mvn clean install -DskipTests -P openshift The application is compiled with S2I and deployed to OpenShift. Verify that the Spring Boot application is running correctly: Follow the Verification steps detailed in the Deploying Red Hat build of Quarkus Java applications to OpenShift Container Platform section of the Red Hat build of Quarkus documentation. When your new Spring Boot application is running correctly, it is discovered by the HawtIO instance (depending on its mode - 'Namespace' mode requires it to be in the same project). The new container should be displayed like in the following screenshot: Click Connect to examine the Spring Boot application can be with HawtIO: 7.5. Additional resources Deploying Red Hat build of Quarkus Java applications to OpenShift Container Platform Camel Spring Boot Starter Configuration Deploying a Spring Boot Camel application to OpenShift A Spring Boot application: Getting started Using the JKube OpenShift plugin
[ "camel.routecontroller.enabled = true", "mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-spring-boot -DarchetypeVersion=4.8.0.redhat-00022 -DgroupId=io.hawt.online.examples -DartifactId=hawtio-online-example -Dversion=1.0.0 -DinteractiveMode=false -Dpackage=io.hawtio", "mvn spring-boot:run", "<dependencies> <!-- Camel --> <!-- Dependency is mandatory for exposing Jolokia endpoint --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jolokia-starter</artifactId> </dependency> <!-- Optional: enables debugging support for Camel --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-debug</artifactId> <version>4.8.0</version> </dependency> </dependencies>", "camel.springboot.inflight-repository-browse-enabled=true", "spec: containers: - name: my-container ports: - name: jolokia containerPort: 8778 protocol: TCP ..... ....", "mvn clean install -DskipTests -P openshift" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/setting-up-applications-for-hawtio-online-jolokia
Chapter 7. Advisories related to this release
Chapter 7. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:6738 RHSA-2023:6887 RHEA-2023:6888 RHSA-2023:6889 RHBA-2023:6897 RHBA-2023:6898 Revised on 2024-05-09 14:50:03 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/openjdk-2101-advisory_openjdk
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/developing_c_and_cpp_applications_in_rhel_9/proc_providing-feedback-on-red-hat-documentation_developing-applications
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.4/making-open-source-more-inclusive
Chapter 12. AWS S3 Sink
Chapter 12. AWS S3 Sink Upload data to AWS S3. The Kamelet expects the following headers to be set: file / ce-file : as the file name to upload If the header won't be set the exchange ID will be used as file name. 12.1. Configuration Options The following table summarizes the configuration options available for the aws-s3-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS. string bucketNameOrArn * Bucket Name The S3 Bucket name or ARN. string region * AWS Region The AWS region to connect to. string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS. string autoCreateBucket Autocreate Bucket Setting the autocreation of the S3 bucket bucketName. boolean false Note Fields marked with an asterisk (*) are mandatory. 12.2. Dependencies At runtime, the aws-s3-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-s3 camel:kamelet 12.3. Usage This section describes how you can use the aws-s3-sink . 12.3.1. Knative Sink You can use the aws-s3-sink Kamelet as a Knative sink by binding it to a Knative object. aws-s3-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-sink properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" 12.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 12.3.1.2. Procedure for using the cluster CLI Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-s3-sink-binding.yaml 12.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 12.3.2. Kafka Sink You can use the aws-s3-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-s3-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-sink properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" 12.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 12.3.2.2. Procedure for using the cluster CLI Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-s3-sink-binding.yaml 12.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 12.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-s3-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-sink properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-s3-sink-binding.yaml", "kamel bind channel:mychannel aws-s3-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketNameOrArn=The Bucket Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-sink properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-s3-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketNameOrArn=The Bucket Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-s3-sink
Part III. Configure and Verify
Part III. Configure and Verify
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-basic_administration
Chapter 1. Installing an on-premise cluster using the Assisted Installer
Chapter 1. Installing an on-premise cluster using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. 1.1. Using the Assisted Installer The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports the various deployment platforms with a focus on bare metal, Nutanix, and vSphere infrastructures. The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages: Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually. No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster. Hosting: The Assisted Installer hosts: Ignition files The installation configuration A discovery ISO The installer Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which: Eliminates the need to install and run the OpenShift Container Platform installer locally. Ensures the latest version of the installer up to the latest tested z-stream releases. Older versions remain available, if needed. Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally. Advanced networking: The Assisted Installer supports IPv4 networking with SDN and OVN, IPv6 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy. OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but is not supported for OpenShift Container Platform 4.15 and later releases. Preinstallation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. The validation process includes the following checks: Ensuring network connectivity Ensuring sufficient network bandwidth Ensuring connectivity to the registry Ensuring time synchronization between cluster nodes Verifying that the cluster nodes meet the minimum hardware requirements Validating the installation configuration parameters REST API: The Assisted Installer has a REST API, enabling automation. The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following: Highly available OpenShift Container Platform or single-node OpenShift (SNO) OpenShift Container Platform on bare metal, Nutanix, or vSphere with full platform integration, or other virtualization platforms without integration Optional: OpenShift Virtualization, multicluster engine, Logical Volume Manager (LVM) Storage, and OpenShift Data Foundation Note Currently, OpenShift Virtualization and LVM Storage are not supported on IBM Z(R) ( s390x ) architecture. The user interface provides an intuitive interactive workflow where automation does not exist or is not required. Users may also automate installations using the REST API. See the Assisted Installer for OpenShift Container Platform documentation for details. 1.2. API support for the Assisted Installer Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on-premise_with_assisted_installer/installing-on-prem-assisted
Chapter 6. Clustered Locks
Chapter 6. Clustered Locks Clustered locks are data structures that are distributed and shared across nodes in a Data Grid cluster. Clustered locks allow you to run code that is synchronized between nodes. 6.1. Lock API Data Grid provides a ClusteredLock API that lets you concurrently execute code on a cluster when using Data Grid in embedded mode. The API consists of the following: ClusteredLock exposes methods to implement clustered locks. ClusteredLockManager exposes methods to define, configure, retrieve, and remove clustered locks. EmbeddedClusteredLockManagerFactory initializes ClusteredLockManager implementations. Ownership Data Grid supports NODE ownership so that all nodes in a cluster can use a lock. Reentrancy Data Grid clustered locks are non-reentrant so any node in the cluster can acquire a lock but only the node that creates the lock can release it. If two consecutive lock calls are sent for the same owner, the first call acquires the lock if it is available and the second call is blocked. Reference EmbeddedClusteredLockManagerFactory ClusteredLockManager ClusteredLock 6.2. Using Clustered Locks Learn how to use clustered locks with Data Grid embedded in your application. Prerequisites Add the infinispan-clustered-lock dependency to your pom.xml : <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-clustered-lock</artifactId> </dependency> Procedure Initialize the ClusteredLockManager interface from a Cache Manager. This interface is the entry point for defining, retrieving, and removing clustered locks. Give a unique name for each clustered lock. Acquire locks with the lock.tryLock(1, TimeUnit.SECONDS) method. // Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Configure the cache mode, in this case it is distributed and synchronous. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Initialize a new default Cache Manager. DefaultCacheManager cm = new DefaultCacheManager(global.build(), builder.build()); // Initialize a Clustered Lock Manager. ClusteredLockManager clm1 = EmbeddedClusteredLockManagerFactory.from(cm); // Define a clustered lock named 'lock'. clm1.defineLock("lock"); // Get a lock from each node in the cluster. ClusteredLock lock = clm1.get("lock"); AtomicInteger counter = new AtomicInteger(0); // Acquire the lock as follows. // Each 'lock.tryLock(1, TimeUnit.SECONDS)' method attempts to acquire the lock. // If the lock is not available, the method waits for the timeout period to elapse. When the lock is acquired, other calls to acquire the lock are blocked until the lock is released. CompletableFuture<Boolean> call1 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println("lock is acquired by the call 1"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println("lock is released by the call 1"); counter.incrementAndGet(); }); } }); CompletableFuture<Boolean> call2 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println("lock is acquired by the call 2"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println("lock is released by the call 2"); counter.incrementAndGet(); }); } }); CompletableFuture<Boolean> call3 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println("lock is acquired by the call 3"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println("lock is released by the call 3"); counter.incrementAndGet(); }); } }); CompletableFuture.allOf(call1, call2, call3).whenComplete((r, ex) -> { // Print the value of the counter. System.out.println("Value of the counter is " + counter.get()); // Stop the Cache Manager. cm.stop(); }); 6.3. Configuring Internal Caches for Locks Clustered Lock Managers include an internal cache that stores lock state. You can configure the internal cache either declaratively or programmatically. Procedure Define the number of nodes in the cluster that store the state of clustered locks. The default value is -1 , which replicates the value to all nodes. Specify one of the following values for the cache reliability, which controls how clustered locks behave when clusters split into partitions or multiple nodes leave: AVAILABLE : Nodes in any partition can concurrently operate on locks. CONSISTENT : Only nodes that belong to the majority partition can operate on locks. This is the default value. Programmatic configuration import org.infinispan.lock.configuration.ClusteredLockManagerConfiguration; import org.infinispan.lock.configuration.ClusteredLockManagerConfigurationBuilder; import org.infinispan.lock.configuration.Reliability; ... GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); final ClusteredLockManagerConfiguration config = global.addModule(ClusteredLockManagerConfigurationBuilder.class).numOwner(2).reliability(Reliability.AVAILABLE).create(); DefaultCacheManager cm = new DefaultCacheManager(global.build()); ClusteredLockManager clm1 = EmbeddedClusteredLockManagerFactory.from(cm); clm1.defineLock("lock"); Declarative configuration <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-14.0.xsd" xmlns="urn:infinispan:config:14.0"> <cache-container default-cache="default"> <transport/> <local-cache name="default"> <locking concurrency-level="100" acquire-timeout="1000"/> </local-cache> <clustered-locks xmlns="urn:infinispan:config:clustered-locks:14.0" num-owners = "3" reliability="AVAILABLE"> <clustered-lock name="lock1" /> <clustered-lock name="lock2" /> </clustered-locks> </cache-container> <!-- Cache configuration goes here. --> </infinispan> Reference ClusteredLockManagerConfiguration Clustered Locks Configuration Schema
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-clustered-lock</artifactId> </dependency>", "// Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Configure the cache mode, in this case it is distributed and synchronous. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Initialize a new default Cache Manager. DefaultCacheManager cm = new DefaultCacheManager(global.build(), builder.build()); // Initialize a Clustered Lock Manager. ClusteredLockManager clm1 = EmbeddedClusteredLockManagerFactory.from(cm); // Define a clustered lock named 'lock'. clm1.defineLock(\"lock\"); // Get a lock from each node in the cluster. ClusteredLock lock = clm1.get(\"lock\"); AtomicInteger counter = new AtomicInteger(0); // Acquire the lock as follows. // Each 'lock.tryLock(1, TimeUnit.SECONDS)' method attempts to acquire the lock. // If the lock is not available, the method waits for the timeout period to elapse. When the lock is acquired, other calls to acquire the lock are blocked until the lock is released. CompletableFuture<Boolean> call1 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println(\"lock is acquired by the call 1\"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println(\"lock is released by the call 1\"); counter.incrementAndGet(); }); } }); CompletableFuture<Boolean> call2 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println(\"lock is acquired by the call 2\"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println(\"lock is released by the call 2\"); counter.incrementAndGet(); }); } }); CompletableFuture<Boolean> call3 = lock.tryLock(1, TimeUnit.SECONDS).whenComplete((r, ex) -> { if (r) { System.out.println(\"lock is acquired by the call 3\"); lock.unlock().whenComplete((nil, ex2) -> { System.out.println(\"lock is released by the call 3\"); counter.incrementAndGet(); }); } }); CompletableFuture.allOf(call1, call2, call3).whenComplete((r, ex) -> { // Print the value of the counter. System.out.println(\"Value of the counter is \" + counter.get()); // Stop the Cache Manager. cm.stop(); });", "import org.infinispan.lock.configuration.ClusteredLockManagerConfiguration; import org.infinispan.lock.configuration.ClusteredLockManagerConfigurationBuilder; import org.infinispan.lock.configuration.Reliability; GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); final ClusteredLockManagerConfiguration config = global.addModule(ClusteredLockManagerConfigurationBuilder.class).numOwner(2).reliability(Reliability.AVAILABLE).create(); DefaultCacheManager cm = new DefaultCacheManager(global.build()); ClusteredLockManager clm1 = EmbeddedClusteredLockManagerFactory.from(cm); clm1.defineLock(\"lock\");", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-14.0.xsd\" xmlns=\"urn:infinispan:config:14.0\"> <cache-container default-cache=\"default\"> <transport/> <local-cache name=\"default\"> <locking concurrency-level=\"100\" acquire-timeout=\"1000\"/> </local-cache> <clustered-locks xmlns=\"urn:infinispan:config:clustered-locks:14.0\" num-owners = \"3\" reliability=\"AVAILABLE\"> <clustered-lock name=\"lock1\" /> <clustered-lock name=\"lock2\" /> </clustered-locks> </cache-container> <!-- Cache configuration goes here. --> </infinispan>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/clustered_locks
Appendix A. Deployment migration options
Appendix A. Deployment migration options This section includes topics related validation of DCN storage, as well as migrating or changing architectures. A.1. Validating edge storage Ensure that the deployment of central and edge sites are working by testing glance multi-store and instance creation. You can import images into glance that are available on the local filesystem or available on a web server. Note Always store an image copy in the central site, even if there are no instances using the image at the central location. Prerequisites Check the stores that are available through the Image service by using the glance stores-info command. In the following example, three stores are available: central, dcn1, and dcn2. These correspond to glance stores at the central location and edge sites, respectively: A.1.1. Importing from a local file You must upload the image to the central location's store first, then copy the image to remote sites. Ensure that your image file is in RAW format. If the image is not in raw format, you must convert the image before importing it into the Image service: A.1.2. Importing an image from a web server If the image is hosted on a web server, you can use the GlanceImageImportPlugins parameter to upload the image to multiple stores. This procedure assumes that the default image conversion plugin is enabled in glance. This feature automatically converts QCOW2 file formats into RAW images, which are optimal for Ceph RBD. You can confirm that a glance image is in RAW format by running the glance image-show ID | grep disk_format . Procedure Use the image-create-via-import parameter of the glance command to import an image from a web server. Use the --stores parameter. In this example, the qcow2 cirros image is downloaded from the official Cirros site, converted to RAW by glance, and imported into the central site and edge site 1 as specified by the --stores parameter. Alternatively you can replace --stores with --all-stores True to upload the image to all of the stores. A.1.3. Copying an image to a new site You can copy existing images from the central location to edge sites, which gives you access to previously created images at newly established locations. Use the UUID of the glance image for the copy operation: Note In this example, the --stores option specifies that the cirros image will be copied from the central site to edge sites dcn1 and dcn2. Alternatively, you can use the --all-stores True option, which uploads the image to all the stores that don't currently have the image. Confirm a copy of the image is in each store. Note that the stores key, which is the last item in the properties map, is set to central,dcn0,dcn1 .: Note Always store an image copy in the central site even if there is no VM using it on that site. A.1.4. Confirming that an instance at an edge site can boot with image based volumes You can use an image at the edge site to create a persistent root volume. Procedure Identify the ID of the image to create as a volume, and pass that ID to the openstack volume create command: Identify the volume ID of the newly created volume and pass it to the openstack server create command: You can verify that the volume is based on the image by running the rbd command within a ceph-mon container at the dcn0 edge site to list the volumes pool. Confirm that you can create a cinder snapshot of the root volume of the instance. Ensure that the server is stopped to quiesce data to create a clean snapshot. Use the --force option, because the volume status remains in-use when the instance is off. List the contents of the volumes pool on the dcn0 Ceph cluster to show the newly created snapshot. A.1.5. Confirming image snapshots can be created and copied between sites Verify that you can create a new image at the dcn0 site. Ensure that the server is stopped to quiesce data to create a clean snapshot: Copy the image from the dcn0 edge site back to the hub location, which is the default back end for glance: For more information on glance multistore operations, see Image service with multiple stores . A.2. Migrating to a spine and leaf deployment It is possible to migrate an existing cloud with a pre-existing network configuration to one with a spine leaf architecture. For this, the following conditions are needed: All bare metal ports must have their physical-network property value set to ctlplane . The parameter enable_routed_networks is added and set to true in undercloud.conf, followed by a re-run of the undercloud installation command, openstack undercloud install . Once the undercloud is re-deployed, the overcloud is considered a spine leaf, with a single leaf leaf0 . You can add additional provisioning leaves to the deployment through the following steps. Add the desired subnets to undercloud.conf as shown in Configuring routed spine-leaf in the undercloud . Re-run the undercloud installation command, openstack undercloud install . Add the desired additional networks and roles to the overcloud templates, network_data.yaml and roles_data.yaml respectively. Note If you are using the {{network.name}}InterfaceRoutes parameter in the network configuration file, then you'll need to ensure that the NetworkDeploymentActions parameter includes the value UPDATE . Finally, re-run the overcloud installation script that includes all relevant heat templates for your cloud deployment. A.3. Migrating to a multistack deployment You can migrate from a single stack deployment to a multistack deployment by treating the existing deployment as the central site, and adding additional edge sites. You cannot split the existing stack. You can scale down the existing stack to remove compute nodes if needed. These compute nodes can then be added to edge sites. Note This action creates workload interruptions if all compute nodes are removed. A.4. Backing up and restoring across edge sites You can back up and restore Block Storage service (cinder) volumes across distributed compute node (DCN) architectures in edge site and availability zones. The cinder-backup service runs in the central availability zone (AZ), and backups are stored in the central AZ. The Block Storage service does not store backups at DCN sites. Prerequisites Deploy the optional Block Storage backup service. For more information, see Block Storage backup service deployment in Backing up Block Storage volumes . Block Storage (cinder) REST API microversion 3.51 or later. All sites must use a common openstack cephx client name. For more information, see Creating a Ceph key for external access in Deploying a Distributed Compute Node (DCN) architecture . Procedure Create a backup of a volume in the first DCN site: Replace <volume_backup> with a name for the volume backup. Replace <az_central> with the name of the central availability zone that hosts the cinder-backup service. Replace <edge_volume> with the name of the volume that you want to back up. Note If you experience issues with Ceph keyrings, you might need to restart the cinder-backup container so that the keyrings copy from the host to the container successfully. Restore the backup to a new volume in the second DCN site: Replace <az_2> with the name of the availability zone where you want to restore the backup. Replace <new_volume> with a name for the new volume. Replace <volume_backup> with the name of the volume backup that you created in the step. Replace <volume_size> with a value in GB equal to or greater than the size of the original volume. A.5. Overcloud adoption and preparation in a DCN environment You must perform the following tasks for overcloud adoption: Each site is fully upgraded separately, one by one, starting with the central location. Adopt the network and host provisioning configuration exports into the overcloud, for central location stack.```suggestion:-0+0 Define new containers and additional compatibility configuration. After adoption, you must run the upgrade preparation script, which performs the following tasks: Updates the overcloud plan to OpenStack Platform 17.1 Prepares the nodes for the upgrade For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact . Prerequisites All nodes are in the ACTIVE state: If any nodes are in the MAINTENANCE state, set them to ACTIVE : Replace <node_uuid> with the UUID of the node. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Verify that the following files that were exported during the undercloud upgrade contain the expected configuration for the overcloud upgrade. You can find the following files in the ~/overcloud-deploy directory: tripleo-<stack>-passwords.yaml tripleo-<stack>-network-data.yaml tripleo-<stack>-virtual-ips.yaml tripleo-<stack>-baremetal-deployment.yaml Note If the files were not generated after the undercloud upgrade, contact Red Hat Support. Important If you have a multi-cell environment, review Overcloud adoption for multi-cell environments for an example of copying the files to each cell stack. On the main stack, copy the passwords.yaml file to the ~/overcloud-deploy/USD(<stack>) directory. Repeat this step on each stack in your environment: Replace <stack> with the name of your stack. If you are performing the preparation and adoption at the central location, copy the network-data.yaml file to the stack user's home directory and deploy the networks. Do this only for the central location: For more information, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director . If you are performing the preparation and adoption at the central location, copy the virtual-ips.yaml file to the stack user's home directory and provision the network VIPs. Do this only for the central location: On the main stack, copy the baremetal-deployment.yaml file to the stack user's home directory and provision the overcloud nodes. Repeat this step on each stack in your environment: Note This is the final step of the overcloud adoption. If your overcloud adoption takes longer than 10 minutes to complete, contact Red Hat Support. Complete the following steps to prepare the containers: Back up the containers-prepare-parameter.yaml file that you used for the undercloud upgrade: Define the following environment variables before you run the script to update the containers-prepare-parameter.yaml file: NAMESPACE : The namespace for the UBI9 images. For example, NAMESPACE='"namespace":"example.redhat.com:5002",' EL8_NAMESPACE : The namespace for the UBI8 images. NEUTRON_DRIVER : The driver to use and determine which OpenStack Networking (neutron) container to use. Set to the type of containers you used to deploy the original stack. For example, set to NEUTRON_DRIVER='"neutron_driver":"ovn",' to use OVN-based containers. EL8_TAGS : The tags of the UBI8 images, for example, EL8_TAGS='"tag":"17.1",' . Replace "17.1", with the tag that you use in your content view. EL9_TAGS : The tags of the UBI9 images, for example, EL9_TAGS='"tag":"17.1",' . Replace "17.1", with the tag that you use in your content view. For more information about the tag parameter, see Container image preparation parameters in Customizing your Red Hat OpenStack Platform deployment . CONTROL_PLANE_ROLES : The list of control plane roles using the --role option, for example, --role ControllerOpenstack, --role Database, --role Messaging, --role Networker, --role CephStorage . To view the list of control plane roles in your environment, run the following command: Replace <stack> with the name of your stack. COMPUTE_ROLES : The list of Compute roles using the --role option, for example, --Compute-1 . To view the list of Compute roles in your environment, run the following command: CEPH_OVERRIDE : If you deployed Red Hat Ceph Storage, specify the Red Hat Ceph Storage 5 container images. For example: CEPH_OVERRIDE='"ceph_image":"rhceph-5-rhel8","ceph_tag":"<latest>",' Replace <latest> with the latest ceph_tag version, for example, 5-499 . The following is an example of the containers-prepare-parameter.yaml file configuration: Run the following script to to update the containers-prepare-parameter.yaml file: Warning If you deployed Red Hat Ceph Storage, ensure that the CEPH_OVERRIDE environment variable is set to the correct values before executing the following command. Failure to do so results in issues when upgrading Red Hat Ceph Storage. The multi-rhel-container-image-prepare.py script supports the following parameters: --output-env-file Writes the environment file that contains the default ContainerImagePrepare value. --local-push-destination Triggers an upload to a local registry. --enable-registry-login Enables the flag that allows the system to attempt to log in to a remote registry prior to pulling the containers. Use this flag when --local-push-destination is not used and the target systems have network connectivity to remote registries. Do not use this flag for an overcloud that might not have network connectivity to a remote registry. --enable-multi-rhel Enables multi-rhel. --excludes Lists the services to exclude. --major-override Lists the override parameters for a major release. --minor-override Lists the override parameters for a minor release. --role The list of roles. --role-file The role_data.yaml file. If you deployed Red Hat Ceph Storage, open the containers-prepare-parameter.yaml file to confirm that the Red Hat Ceph Storage 5 container images are specified and that there are no references to Red Hat Ceph Storage 6 container images. If you have a director-deployed Red Hat Ceph Storage deployment, create a file called ceph_params.yaml and include the following content: Important Do not remove the ceph_params.yaml file after the RHOSP upgrade is complete. This file must be present in director-deployed Red Hat Ceph Storage environments. Additionally, any time you run openstack overcloud deploy , you must include the ceph_params.yaml file, for example, -e ceph_params.yaml . Note If your Red Hat Ceph Storage deployment includes short names, you must set the CephSpecFqdn parameter to false . If set to true , the inventory generates with both the short names and domain names, causing the Red Hat Ceph Storage upgrade to fail. Create an environment file called upgrades-environment.yaml in your templates directory and include the following content: Replace <dns_servers> with a comma-separated list of your DNS server IP addresses, for example, ["10.0.0.36", "10.0.0.37"] . Replace <undercloud_FQDN> with the fully qualified domain name (FQDN) of the undercloud host, for example, "undercloud-0.ctlplane.redhat.local:8787" . For more information about the upgrade parameters that you can configure in the environment file, see Upgrade parameters . If you are performing the preparation and adoption at an edge location, set the AuthCloudName parameter to the name of the central location: If multiple Image service (glance) stores are deployed, set the Image service API policy for copy-image to allow all rules: On the undercloud, create a file called overcloud_upgrade_prepare.sh in your templates directory. You must create this file for each stack in your environment. This file includes the original content of your overcloud deploy file and the environment files that are relevant to your environment. If you are creating the overcloud_upgrade_prepare.sh for a DCN edge location, you must include the following templates: An environment template that contains exported central site parameters. You can find this file in /home/stack/overcloud-deploy/centra/central-export.yaml . generated-networks-deployed.yaml , the resulting file from running the openstack overcloud network provision command at the central location. generated-vip-deployed.yaml , the resulting file from running the openstack overcloud network vip provision command at the central location. + For example: Note If you have a multi-cell environment, review Overcloud adoption for multi-cell environments for an example of creating the overcloud_upgrade_prepare.sh file for each cell stack. In the original network-environment.yaml file ( /home/stack/templates/network/network-environment.yaml ), remove all the resource_registry resources that point to OS::TripleO::*::Net::SoftwareConfig . In the overcloud_upgrade_prepare.sh file, include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( skip_rhel_release.yaml ) with the release parameters (-e). Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file by using --roles-file . For Ceph deployments, the environment file ( ceph_params.yaml ) with the Ceph parameters (-e). The files that were generated during overcloud adoption ( networks-deployed.yaml , vip-deployed.yaml , baremetal-deployment.yaml ) (-e). If applicable, the environment file ( ipa-environment.yaml ) with your IPA service (-e). If you are using composable networks, the ( network_data ) file by using --network-file . Note Do not include the network-isolation.yaml file in your overcloud deploy file or the overcloud_upgrade_prepare.sh file. Network isolation is defined in the network_data.yaml file. If you use a custom stack name, pass the name with the --stack option. Note You must include the nova-hw-machine-type-upgrade.yaml file in your templates until all of your RHEL 8 Compute nodes are upgraded to RHEL 9 in the environment. If this file is excluded, an error appears in the nova_compute.log in the /var/log/containers/nova directory. After you upgrade all of your RHEL 8 Compute nodes to RHEL 9, you can remove this file from your configuration and update the stack. In the director-deployed Red Hat Ceph Storage use case, if you enabled the Shared File Systems service (manila) with CephFS through NFS on the deployment that you are upgrading, you must specify an additional environment file at the end of the overcloud_upgrade_prepare.sh script file. You must add the environment file at the end of the script because it overrides another environment file that is specified earlier in the script: In the external Red Hat Ceph Storage use case, if you enabled the Shared File Systems service (manila) with CephFS through NFS on the deployment that you are upgrading, you must check that the associated environment file in the overcloud_upgrade_prepare.sh script points to the tripleo-based ceph-nfs role. If present, remove the following environment file: And add the following environment file: Run the upgrade preparation script for each stack in your environment: Note If you have a multi-cell environment, you must run the script for each overcloud_upgrade_prepare.sh file that you created for each cell stack. For an example, see Overcloud adoption for multi-cell environments . Wait until the upgrade preparation completes. Download the container images:
[ "glance stores-info +----------+----------------------------------------------------------------------------------+ | Property | Value | +----------+----------------------------------------------------------------------------------+ | stores | [{\"default\": \"true\", \"id\": \"central\", \"description\": \"central rbd glance | | | store\"}, {\"id\": \"dcn0\", \"description\": \"dcn0 rbd glance store\"}, | | | {\"id\": \"dcn1\", \"description\": \"dcn1 rbd glance store\"}] | +----------+----------------------------------------------------------------------------------+", "file cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.img: QEMU QCOW2 Image (v3), 117440512 bytes qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw", "Import the image into the default back end at the central site:", "glance image-create --disk-format raw --container-format bare --name cirros --file cirros-0.5.1-x86_64-disk.raw --store central", "glance image-create-via-import --disk-format qcow2 --container-format bare --name cirros --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img --import-method web-download --stores central,dcn1", "ID=USD(openstack image show cirros -c id -f value) glance image-import USDID --stores dcn0,dcn1 --import-method copy-image", "openstack image show USDID | grep properties | properties | direct_url= rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap , locations= [{u'url : u'rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'central'}}, {u'url': u'rbd://0c10d6b5-a455-4c4d-bd53-8f2b9357c3c7/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn0'}}, {u'url': u'rbd://8649d6c3-dcb3-4aae-8c19-8c2fe5a853ac/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn1'}}] , os_glance_failed_import= ', os_glance_importing_to_stores= ', os_hash_algo='sha512 , os_hash_value= b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e , os_hidden= False , stores= central,dcn0,dcn1 |", "IMG_ID=USD(openstack image show cirros -c id -f value) openstack volume create --size 8 --availability-zone dcn0 pet-volume-dcn0 --image USDIMG_ID", "VOL_ID=USD(openstack volume show -f value -c id pet-volume-dcn0) openstack server create --flavor tiny --key-name dcn0-key --network dcn0-network --security-group basic --availability-zone dcn0 --volume USDVOL_ID pet-server-dcn0", "sudo podman exec ceph-mon-USDHOSTNAME rbd --cluster dcn0 -p volumes ls -l NAME SIZE PARENT FMT PROT LOCK volume-28c6fc32-047b-4306-ad2d-de2be02716b7 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 2 excl USD", "openstack server stop pet-server-dcn0 openstack volume snapshot create pet-volume-dcn0-snap --volume USDVOL_ID --force openstack server start pet-server-dcn0", "sudo podman exec ceph-mon-USDHOSTNAME rbd --cluster dcn0 -p volumes ls -l NAME SIZE PARENT FMT PROT LOCK volume-28c6fc32-047b-4306-ad2d-de2be02716b7 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 2 excl volume-28c6fc32-047b-4306-ad2d-de2be02716b7@snapshot-a1ca8602-6819-45b4-a228-b4cd3e5adf60 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 2 yes", "NOVA_ID=USD(openstack server show pet-server-dcn0 -f value -c id) openstack server stop USDNOVA_ID openstack server image create --name cirros-snapshot USDNOVA_ID openstack server start USDNOVA_ID", "IMAGE_ID=USD(openstack image show cirros-snapshot -f value -c id) glance image-import USDIMAGE_ID --stores central --import-method copy-image", "NetworkDeploymentActions: ['CREATE','UPDATE'])", "cinder --os-volume-api-version 3.51 backup-create --name <volume_backup> --availability-zone <az_central> <edge_volume>", "cinder --os-volume-api-version 3.51 create --availability-zone <az_2> --name <new_volume> --backup-id <volume_backup> <volume_size>", "openstack baremetal node list", "openstack baremetal node maintenance unset <node_uuid>", "source ~/stackrc", "cp ~/overcloud-deploy/<stack>/tripleo-<stack>-passwords.yaml ~/overcloud-deploy/<stack>/<stack>-passwords.yaml", "cp /home/stack/overcloud-deploy/central/tripleo-central-network-data.yaml ~/ mkdir /home/stack/overcloud_adopt openstack overcloud network provision --debug --output /home/stack/overcloud_adopt/generated-networks-deployed.yaml tripleo-central-network-data.yaml", "cp /home/stack/overcloud-deploy/central/tripleo-central-virtual-ips.yaml ~/ openstack overcloud network vip provision --debug --stack <stack> --output /home/stack/overcloud_adopt/generated-vip-deployed.yaml tripleo-central-virtual-ips.yaml", "cp ~/overcloud-deploy/<stack>/tripleo-<stack>-baremetal-deployment.yaml ~/ openstack overcloud node provision --debug --stack <stack> --output /home/stack/overcloud_adopt/baremetal-central-deployment.yaml tripleo-<stack>-baremetal-deployment.yaml", "cp containers-prepare-parameter.yaml containers-prepare-parameter.yaml.orig", "export STACK=<stack> sudo awk '/tripleo_role_name/ {print \"--role \" USD2}' /var/lib/mistral/USD{STACK}/tripleo-ansible-inventory.yaml | grep -vi compute", "sudo awk '/tripleo_role_name/ {print \"--role \" USD2}' /var/lib/mistral/USD{STACK}/tripleo-ansible-inventory.yaml | grep -i compute", "NAMESPACE='\"namespace\":\"registry.redhat.io/rhosp-rhel9\",' EL8_NAMESPACE='\"namespace\":\"registry.redhat.io/rhosp-rhel8\",' NEUTRON_DRIVER='\"neutron_driver\":\"ovn\",' EL8_TAGS='\"tag\":\"17.1\",' EL9_TAGS='\"tag\":\"17.1\",' CONTROL_PLANE_ROLES=\"--role Controller\" COMPUTE_ROLES=\"--role Compute\" CEPH_TAGS='\"ceph_tag\":\"5\",'", "python3 /usr/share/openstack-tripleo-heat-templates/tools/multi-rhel-container-image-prepare.py USD{COMPUTE_ROLES} USD{CONTROL_PLANE_ROLES} --enable-multi-rhel --excludes collectd --excludes nova-libvirt --minor-override \"{USD{EL8_TAGS}USD{EL8_NAMESPACE}USD{CEPH_OVERRIDE}USD{NEUTRON_DRIVER}\\\"no_tag\\\":\\\"not_used\\\"}\" --major-override \"{USD{EL9_TAGS}USD{NAMESPACE}USD{CEPH_OVERRIDE}USD{NEUTRON_DRIVER}\\\"no_tag\\\":\\\"not_used\\\"}\" --output-env-file /home/stack/containers-prepare-parameter.yaml", "parameter_defaults: CephSpecFqdn: true CephConfigPath: \"/etc/ceph\" CephAnsibleRepo: \"rhceph-5-tools-for-rhel-8-x86_64-rpms\" DeployedCeph: true", "parameter_defaults: ExtraConfig: nova::workarounds::disable_compute_service_check_for_ffu: true DnsServers: [\"<dns_servers>\"] DockerInsecureRegistryAddress: <undercloud_FQDN> UpgradeInitCommand: | sudo subscription-manager repos --disable=* if USD( grep -q 9.2 /etc/os-release ) then sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms sudo subscription-manager release --set=9.2 else sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms sudo subscription-manager release --set=8.4 fi if USD(sudo podman ps | grep -q ceph ) then sudo dnf -y install cephadm fi", "parameter_defaults: AuthCloudName: central", "parameter_defaults: GlanceApiPolicies: {glance-copy_image: {key 'copy-image', value: \"\"}}", "#!/bin/bash openstack overcloud upgrade prepare --yes --timeout 460 --templates /usr/share/openstack-tripleo-heat-templates --ntp-server 192.168.24.1 --stack <stack> -r /home/stack/roles_data.yaml -e /home/stack/templates/internal.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/inject-trust-anchor.yaml -e /home/stack/templates/hostnames.yml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/debug.yaml -e /home/stack/templates/firstboot.yaml -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/overcloud-params.yaml -e /home/stack/overcloud-deploy/<stack>/overcloud-network-environment.yaml -e /home/stack/overcloud-adopt/<stack>-passwords.yaml -e /home/stack/overcloud_adopt/<stack>-baremetal-deployment.yaml -e /home/stack/overcloud_adopt/generated-networks-deployed.yaml -e /home/stack/overcloud_adopt/generated-vip-deployed.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-hw-machine-type-upgrade.yaml -e /home/stack/skip_rhel_release.yaml -e ~/containers-prepare-parameter.yaml", "-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml", "-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml", "-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml", "source stackrc chmod 755 /home/stack/overcloud_upgrade_prepare.sh sh /home/stack/overcloud_upgrade_prepare.sh", "openstack overcloud external-upgrade run --stack <stack> --tags container_image_prepare" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/deployment_migration_options
Installing OpenShift Container Platform with the Assisted Installer
Installing OpenShift Container Platform with the Assisted Installer Assisted Installer for OpenShift Container Platform 2025 User Guide Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/index
Image APIs
Image APIs OpenShift Container Platform 4.12 Reference guide for image APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/image_apis/index
Chapter 23. Configuring Direct Deploy
Chapter 23. Configuring Direct Deploy When provisioning nodes, director mounts the overcloud base operating system image on an iSCSI mount and then copies the image to disk on each node. Direct deploy is an alternative method that writes disk images from a HTTP location directly to disk on bare metal nodes. 23.1. Configuring the direct deploy interface on the undercloud The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from a HTTP location to the target disk. Note Your overcloud node memory tmpfs must have at least 8GB of RAM. Procedure Create or modify a custom environment file /home/stack/undercloud_custom_env.yaml and specify the IronicDefaultDeployInterface . By default, the Bare Metal service (ironic) agent on each node obtains the image stored in the Object Storage service (swift) through a HTTP link. Alternatively, ironic can stream this image directly to the node through the ironic-conductor HTTP server. To change the service that provides the image, set the IronicImageDownloadSource to http in the /home/stack/undercloud_custom_env.yaml file: Include the custom environment file in the DEFAULT section of the undercloud.conf file. Perform the undercloud installation:
[ "parameter_defaults: IronicDefaultDeployInterface: direct", "parameter_defaults: IronicDefaultDeployInterface: direct IronicImageDownloadSource: http", "custom_env_files = /home/stack/undercloud_custom_env.yaml", "openstack undercloud install" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/configuring_direct_deploy
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/intellij_idea_plugin_guide/making-open-source-more-inclusive
2.9.2. Useful Websites
2.9.2. Useful Websites http://www.gnu.org/software/grub/ - The home page of the GNU GRUB project. This site contains information concerning the state of GRUB development and an FAQ. http://www.redhat.com/mirrors/LDP/HOWTO/mini/Multiboot-with-GRUB.html - Investigates various uses for GRUB, including booting operating systems other than Linux. http://www.linuxgazette.com/issue64/kohli.html - An introductory article discussing the configuration of GRUB on a system from scratch, including an overview of GRUB command line options.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-grub-useful-websites
Chapter 15. Log Record Fields
Chapter 15. Log Record Fields The following fields can be present in log records exported by the logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search/q=kubernetes.pod_name:name-of-my-pod . The top level fields may be present in every record. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message, there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0] @timestamp A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for "@timestamp" with ElasticSearch. Data type date Example value 2015-01-24 14:06:05.071000000 Z hostname The name of the host where this log message originated. In a Kubernetes cluster, this is the same as kubernetes.host . Data type keyword ipaddr4 The IPv4 address of the source server. Can be an array. Data type ip ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip level The logging level from various sources, including rsyslog(severitytext property) , a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents : 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The two following values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it doesn't recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info pid The process ID of the logging entity, if available. Data type keyword service The name of the service associated with the logging entity, if available. For example, syslog's APP-NAME and rsyslog's programname properties are mapped to the service field. Data type keyword
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/cluster-logging-exported-fields
Chapter 6. Subscriptions
Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node, however each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at a SMT level 8 will require a 2 core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power have a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/subscriptions_rhodf
Chapter 8. Creating non-secure HTTP load balancers
Chapter 8. Creating non-secure HTTP load balancers You can create the following load balancers for non-secure HTTP network traffic: Section 8.1, "Creating an HTTP load balancer with a health monitor" Section 8.2, "Creating an HTTP load balancer that uses a floating IP" Section 8.3, "Creating an HTTP load balancer with session persistence" 8.1. Creating an HTTP load balancer with a health monitor For networks that are not compatible with Red Hat OpenStack Platform Networking service (neutron) floating IPs, create a load balancer to manage network traffic for non-secure HTTP applications. Create a health monitor to ensure that your back-end members remain available. Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers on the private subnet are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Verify the state of the load balancer. Example Before going to the step, ensure that the provisioning_status is ACTIVE . Create a listener ( listener1 ) on a port ( 80 ). Example Verify the state of the listener. Example Before going to the step, ensure that the status is ACTIVE . Create the listener default pool ( pool1 ). Example Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example Verification View and verify the load balancer (lb1) settings: Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command Line Interface Reference 8.2. Creating an HTTP load balancer that uses a floating IP To manage network traffic for non-secure HTTP applications, create a load balancer with a virtual IP (VIP) that depends on a floating IP. The advantage of using a floating IP is that you retain control of the assigned IP, which is necessary if you need to move, destroy, or recreate your load balancer. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Note Floating IPs do not work with IPv6 networks. Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers are configured with a health check at the URL path / . A floating IP to use with a load balancer VIP. A Red Hat OpenStack Platform Networking service (neutron) shared external (public) subnet that you can reach from the internet to use for the floating IP. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a private subnet ( private_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Note the value of load_balancer_vip_port_id , because you must provide it in a later step. Verify the state of the load balancer. Example Before going to the step, ensure that the provisioning_status is ACTIVE . Create a listener ( listener1 ) on a port ( 80 ). Example Create the listener default pool ( pool1 ). Example Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet to the default pool. Example Create a floating IP address on the shared external subnet ( public ). Example Note the value of floating_ip_address , because you must provide it in a later step. Associate this floating IP ( 203.0.113.0 ) with the load balancer vip_port_id ( 69a85edd-5b1c-458f-96f2-b4552b15b8e6 ). Example Verification Verify HTTP traffic flows across the load balancer by using the floating IP ( 203.0.113.0 ). Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command Line Interface Reference floating in the Command Line Interface Reference 8.3. Creating an HTTP load balancer with session persistence To manage network traffic for non-secure HTTP applications, you can create load balancers that track session persistence. Doing so ensures that when a request comes in, the load balancer directs subsequent requests from the same client to the same back-end server. Session persistence optimizes load balancing by saving time and memory. Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet. The non-secure web applications whose network traffic you are load balancing have cookies enabled. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Verify the state of the load balancer. Example Before going to the step, ensure that the provisioning_status is ACTIVE . Create a listener ( listener1 ) on a port ( 80 ). Example Create the listener default pool ( pool1 ) that defines session persistence on a cookie ( PHPSESSIONID ). Example Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example Verification View and verify the load balancer (lb1) settings: Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command Line Interface Reference
[ "source ~/overcloudrc", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1", "openstack loadbalancer listener show listener1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:13 | | vip_address | 198.51.100.12 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:16:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:20:45 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "source ~/overcloudrc", "openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack floating ip create public", "openstack floating ip set --port 69a85edd-5b1c-458f-96f2-b4552b15b8e6 203.0.113.0", "curl -v http://203.0.113.0 --insecure", "* About to connect() to 203.0.113.0 port 80 (#0) * Trying 203.0.113.0 * Connected to 203.0.113.0 (203.0.113.0) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 203.0.113.0 > Accept: */* > < HTTP/1.1 200 OK < Content-Length: 30 < * Connection #0 to host 203.0.113.0 left intact", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "source ~/overcloudrc", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:58 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:28:42 | | vip_address | 198.51.100.22 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/create-non-secure-http-lbs_rhosp-lbaas
7.189. ppc64-diag
7.189. ppc64-diag 7.189.1. RHBA-2013:0382 - ppc64-diag bug fix and enhancement update Updated ppc64-diag packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The ppc64-diag packages provide diagnostic tools for Linux on the 64-bit PowerPC platforms. The platform diagnostics write events reported by the firmware to the service log, provide automated responses to urgent events, and notify system administrators or connected service frameworks about the reported events. Note The ppc64-diag packages have been upgraded to upstream version 2.5.0, which provides a number of bug fixes and enhancements over the version. (BZ#822653) Bug Fixes BZ#833619 Previously, the GARD functionality could fail to "gard out" a CPU that was being deconfigured on a logical partition (LPAR) if a predictive CPU failure was received. Consequently, the CPU could not be deconfigured. This was caused by incorrect behavior of the SIGCHLD signal handler, which under certain circumstances performed cleanup on a pipe child process that had already exited. This update modifies the underlying source code so that the SIGCHLD signal handler is reset to the default action before a pipe is open and set up again after the pipe is closed. The CPU is now correctly "garded out" and deconfigured as expected in this scenario. Also, vital product data (VPD) extraction from the lsvpd command did not work correctly. This has been fixed by correcting the lsvpd_init() function, and VPD is now obtained as expected. BZ#878314 The diag_encl command was previously enhanced with a comparison feature. The feature requires the /etc/ppc64-diag/ses_pages directory to be created on ppc64-diag installation. However, the ppc64-diag spec file was not modified accordingly so that the required directory was not created when installing the ppc64-diag packages. Consequently, the comparison feature of the diag_encl command did not work. This update corrects the ppc64-diag spec file so that the /etc/ppc64-diag/ses_pages directory is now created as expected, and the comparison feature works properly. All users of ppc64-diag are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ppc64-diag
Chapter 2. Instance boot source
Chapter 2. Instance boot source The boot source for an instance can be an image or a bootable volume. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is controlled by the Block Storage service and is stored remotely. An image contains a bootable operating system. The Image Service (glance) controls image storage and management. You can launch any number of instances from the same base image. Each instance runs from a copy of the base image. Any changes that you make to the instance do not affect the base image. A bootable volume is a block storage volume created from an image that contains a bootable operating system. The instance can use the bootable volume to persist instance data when the instance is deleted. You can use an existing persistent root volume when you launch an instance. You can also create persistent storage when you launch an instance from an image, so that you can save the instance data when the instance is deleted. A new persistent storage volume is created automatically when you create an instance from a volume snapshot. The following diagram shows the instance disks and storage that you can create when you launch an instance. The actual instance disks and storage created depend on the boot source and flavor used.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/con_instance-boot-source_osp
Chapter 1. What is Red Hat JBoss Enterprise Application Platform
Chapter 1. What is Red Hat JBoss Enterprise Application Platform Red Hat JBoss Enterprise Application Platform 8.0 (JBoss EAP) is a middleware platform built on open standards and compliant with the Jakarta EE 10 specification. It provides preconfigured options for features such as high-availability clustering, messaging, and distributed caching. It includes a modular structure that allows you to enable services only when required, which results in improved startup speed. By using the web-based management console and management command line interface (CLI), you can script and automate tasks and avoid having to edit XML configuration files. In addition, JBoss EAP includes APIs and development frameworks that you can use to develop, deploy, and run secure and scalable Jakarta EE applications. JBoss EAP 8.0 is a Jakarta EE 10 compatible implementation for Web Profile, Core Profile, and Full Platform specifications. 1.1. How does JBoss EAP work on OpenShift? Red Hat offers container images to build and run application images with JBoss EAP on OpenShift. Note Red Hat no longer offers images that contain JBoss EAP. 1.2. Comparison: JBoss EAP and JBoss EAP for OpenShift There are some notable differences when comparing the JBoss EAP product with the JBoss EAP for OpenShift image. The following table describes these differences and notes which features are included or supported in the current version of JBoss EAP for OpenShift. Table 1.1. Differences between JBoss EAP and JBoss EAP for OpenShift JBoss EAP Feature Status in JBoss EAP for OpenShift Description JBoss EAP management console Not included The JBoss EAP management console is not included in this release of JBoss EAP for OpenShift. JBoss EAP management CLI Not recommended The JBoss EAP management CLI is not recommended for use with JBoss EAP running in a containerized environment. Any configuration changes made using the management CLI in a running container will be lost when the container restarts. The management CLI is accessible from within a pod for troubleshooting purposes . Managed domain Not supported Although a JBoss EAP managed domain is not supported, creation and distribution of applications are managed in the containers on OpenShift. Default root page Disabled The default root page is disabled, but you can deploy your own application to the root context as ROOT.war . Remote messaging Supported Red Hat AMQ for inter-pod and remote messaging is supported. ActiveMQ Artemis is only supported for messaging within a single pod with JBoss EAP instances and is only enabled when Red Hat AMQ is absent. Transaction recovery Supported The EAP operator is the only tested and supported option of transaction recovery in OpenShift 4. For more information about recovering transactions using the EAP operator, see EAP Operator for Safe Transaction Recovery . 1.3. Version compatibility and support JBoss EAP for OpenShift provides images for OpenJDK 17 and OpenJDK 21 Two variant of the image are available: an S2I builder image and a runtime image. The S2I Builder image contains all the required tools that will enable you provision a complete JBoss EAP Server during S2I build. The runtime image contains dependencies needed to run JBoss EAP but does not contain a server. The server is installed in the runtime image during a chained build. The following modifications were applied to the images in JBoss EAP 8.0 for OpenShift. S2I builder image does not contain an installed JBoss EAP server and installs the JBoss EAP 8.0 server during S2I build. Configure the eap-maven-plugin in the application pom file during S2I build. Use existing JBoss EAP 7.4 application without any changes by setting GALLEON_PROVISION_FEATURE_PACKS , GALLEON_PROVISION_LAYERS , and GALLEON_PROVISION_CHANNELS environment variables during S2I build. The JBoss EAP provisioned server during S2I build contains a standalone.xml server configuration file customized for OpenShift. Important The sever contains a standalone.xml configuration file, not the standalone-openshift.xml configuration file that was used with JBoss EAP 7.4. Inside the image, JBOSS_HOME value is /opt/server . The value of JBOSS_HOME was /opt/eap for JBoss EAP 7.4. Jolokia agent is no longer present in the image. Prometheus agent is not installed. Python probes are no more present. SSO adapters are no longer present in the image. activemq.rar is no more present. Note The following discovery mechanism protocols were deprecated and are replaced by other protocols: The openshift.DNS_PING protocol was deprecated and is replaced with the dns.DNS_PING protocol. If you referenced the openshift.DNS_PING protocol in a customized standalone.xml file, replace the protocol with the dns.DNS_PING protocol. The openshift.KUBE_PING discovery mechanism protocol was deprecated and is replaced with the kubernetes.KUBE_PING protocol. 1.3.1. OpenShift 4.x support Changes in OpenShift 4.1 affect access to Jolokia, and the Open Java Console is no longer available in the OpenShift 4.x web console. In releases of OpenShift, certain kube-apiserver proxied requests were authenticated and passed through to the cluster. This behavior is now considered insecure, and so, accessing Jolokia in this manner is no longer supported. Due to changes in codebase for the OpenShift console, the link to the Open Java Console is no longer available. 1.3.2. IBM Z Support The s390x variant of libartemis-native is not included in the image. Thus, any settings related to AIO will not be taken into account. journal-type : Setting the journal-type to ASYNCIO has no effect. The value of this attribute defaults to NIO at runtime. journal-max-io : This attribute has no effect. journal-store-enable-async-io : This attribute has no effect. 1.3.2.1. Upgrades from JBoss EAP 7.4 to JBoss EAP 8.0 on OpenShift The file standalone.xml installed with JBoss EAP 7.4 on OpenShift is not compatible with JBoss EAP 8.0 and later. You must modify and rename the file to standalone.xml before starting a JBoss EAP 8.0 or later container for OpenShift. Additional resources Updates to standalone.xml when upgrading JBoss EAP 7.1 to JBoss EAP 8.0 on OpenShift . 1.3.3. Deployment options You can deploy the JBoss EAP Java applications on OpenShift using the EAP operator, a JBoss EAP-specific controller that extends the OpenShift API to create, configure, and manage instances of complex stateful applications on behalf of an OpenShift user. Additional resources For more information about the EAP operator, see EAP Operator for Automating Application Deployment on OpenShift .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_what-is-red-hat-jboss-enterprise-application-platform_default
Automation controller API overview
Automation controller API overview Red Hat Ansible Automation Platform 2.4 Developer overview for the automation controller API Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/index
Chapter 2. About Network Observability
Chapter 2. About Network Observability Red Hat offers cluster administrators and developers the Network Observability Operator to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information. They are available as Prometheus metrics or as logs in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. 2.1. Optional dependencies of the Network Observability Operator Loki Operator: Loki is the backend that can be used to store all collected flows with a maximal level of details. You can choose to use Network Observability without Loki , but there are some considerations for doing this, as described in the linked section. If you choose to install Loki, it is recommended to use the Loki Operator, which is supported by Red Hat. AMQ Streams Operator: Kafka provides scalability, resiliency and high availability in the OpenShift Container Platform cluster for large scale deployments. If you choose to use Kafka, it is recommended to use the AMQ Streams Operator, because it is supported by Red Hat. 2.2. Network Observability Operator The Network Observability Operator provides the Flow Collector API custom resource definition. A Flow Collector instance is a cluster-scoped resource that enables configuration of network flow collection. The Flow Collector instance deploys pods and services that form a monitoring pipeline where network flows are then collected and enriched with the Kubernetes metadata before storing in Loki or generating Prometheus metrics. The eBPF agent, which is deployed as a daemonset object, creates the network flows. 2.3. OpenShift Container Platform console integration OpenShift Container Platform console integration offers overview, topology view, and traffic flow tables in both Administrator and Developer perspectives. In the Administrator perspective, you can find the Network Observability Overview , Traffic flows , and Topology views by clicking Observe Network Traffic . In the Developer perspective, you can view this information by clicking Observe . The Network Observability metrics dashboards in Observe Dashboards are only available to administrators. Note To enable multi-tenancy for the developer perspective and for administrators with limited access to namespaces, you must specify permissions by defining roles. For more information, see Enabling multi-tenancy in Network Observability . 2.3.1. Network Observability metrics dashboards On the Overview tab in the OpenShift Container Platform console, you can view the overall aggregated metrics of the network traffic flow on the cluster. You can choose to display the information by zone, node, namespace, owner, pod, and service. Filters and display options can further refine the metrics. For more information, see Observing the network traffic from the Overview view . In Observe Dashboards , the Netobserv dashboards provide a quick overview of the network flows in your OpenShift Container Platform cluster. The Netobserv/Health dashboard provides metrics about the health of the Operator. For more information, see Network Observability Metrics and Viewing health information . 2.3.2. Network Observability topology views The OpenShift Container Platform console offers the Topology tab which displays a graphical representation of the network flows and the amount of traffic. The topology view represents traffic between the OpenShift Container Platform components as a network graph. You can refine the graph by using the filters and display options. You can access the information for zone, node, namespace, owner, pod, and service. 2.3.3. Traffic flow tables The Traffic flow table view provides a view for raw flows, non aggregated filtering options, and configurable columns. The OpenShift Container Platform console offers the Traffic flows tab which displays the data of the network flows and the amount of traffic. 2.4. Network Observability CLI You can quickly debug and troubleshoot networking issues with Network Observability by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/network-observability-overview
Appendix A. Component Versions
Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7 release. Table A.1. Component Versions Component Version kernel 3.10.0-1127 kernel-alt 4.14.0-115 QLogic qla2xxx driver 10.01.00.20.07.8-k QLogic qla4xxx driver 5.04.00.00.07.02-k0 Emulex lpfc driver 0:12.0.0.13 iSCSI initiator utils ( iscsi-initiator-utils ) 6.2.0.874-17 DM-Multipath ( device-mapper-multipath ) 0.4.9-131 LVM ( lvm2 ) 2.02.186-7 qemu-kvm [a] 1.5.3-173 qemu-kvm-ma [b] 2.12.0-33 [a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems. [b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.8_release_notes/component_versions
Chapter 107. Flatpack DataFormat
Chapter 107. Flatpack DataFormat Available as of Camel version 2.1 The Flatpack component ships with the Flatpack data format that can be used to format between fixed width or delimited text messages to a List of rows as Map . marshal = from List<Map<String, Object>> to OutputStream (can be converted to String ) unmarshal = from java.io.InputStream (such as a File or String ) to a java.util.List as an org.apache.camel.component.flatpack.DataSetList instance. The result of the operation will contain all the data. If you need to process each row one by one you can split the exchange, using Splitter. Notice: The Flatpack library does currently not support header and trailers for the marshal operation. 107.1. Options The Flatpack dataformat supports 9 options, which are listed below. Name Default Java Type Description definition String The flatpack pzmap configuration file. Can be omitted in simpler situations, but its preferred to use the pzmap. fixed false Boolean Delimited or fixed. Is by default false = delimited ignoreFirstRecord true Boolean Whether the first line is ignored for delimited files (for the column headers). Is by default true. textQualifier String If the text is qualified with a character. Uses quote character by default. delimiter , String The delimiter char (could be ; , or similar) allowShortLines false Boolean Allows for lines to be shorter than expected and ignores the extra characters ignoreExtraColumns false Boolean Allows for lines to be longer than expected and ignores the extra characters. parserFactoryRef String References to a custom parser factory to lookup in the registry contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 107.2. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.flatpack.enabled Enable flatpack component true Boolean camel.component.flatpack.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.dataformat.flatpack.allow-short-lines Allows for lines to be shorter than expected and ignores the extra characters false Boolean camel.dataformat.flatpack.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.flatpack.definition The flatpack pzmap configuration file. Can be omitted in simpler situations, but its preferred to use the pzmap. String camel.dataformat.flatpack.delimiter The delimiter char (could be ; , or similar) , String camel.dataformat.flatpack.enabled Enable flatpack dataformat true Boolean camel.dataformat.flatpack.fixed Delimited or fixed. Is by default false = delimited false Boolean camel.dataformat.flatpack.ignore-extra-columns Allows for lines to be longer than expected and ignores the extra characters. false Boolean camel.dataformat.flatpack.ignore-first-record Whether the first line is ignored for delimited files (for the column headers). Is by default true. true Boolean camel.dataformat.flatpack.parser-factory-ref References to a custom parser factory to lookup in the registry String camel.dataformat.flatpack.text-qualifier If the text is qualified with a character. Uses quote character by default. String ND 107.3. Usage To use the data format, simply instantiate an instance and invoke the marshal or unmarshal operation in the route builder: FlatpackDataFormat fp = new FlatpackDataFormat(); fp.setDefinition(new ClassPathResource("INVENTORY-Delimited.pzmap.xml")); ... from("file:order/in").unmarshal(df).to("seda:queue:neworder"); The sample above will read files from the order/in folder and unmarshal the input using the Flatpack configuration file INVENTORY-Delimited.pzmap.xml that configures the structure of the files. The result is a DataSetList object we store on the SEDA queue. FlatpackDataFormat df = new FlatpackDataFormat(); df.setDefinition(new ClassPathResource("PEOPLE-FixedLength.pzmap.xml")); df.setFixed(true); df.setIgnoreFirstRecord(false); from("seda:people").marshal(df).convertBodyTo(String.class).to("jms:queue:people"); In the code above we marshal the data from a Object representation as a List of rows as Maps . The rows as Map contains the column name as the key, and the corresponding value. This structure can be created in Java code from e.g. a processor. We marshal the data according to the Flatpack format and convert the result as a String object and store it on a JMS queue. 107.4. Dependencies To use Flatpack in your camel routes you need to add the a dependency on camel-flatpack which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-flatpack</artifactId> <version>x.x.x</version> </dependency>
[ "FlatpackDataFormat fp = new FlatpackDataFormat(); fp.setDefinition(new ClassPathResource(\"INVENTORY-Delimited.pzmap.xml\")); from(\"file:order/in\").unmarshal(df).to(\"seda:queue:neworder\");", "FlatpackDataFormat df = new FlatpackDataFormat(); df.setDefinition(new ClassPathResource(\"PEOPLE-FixedLength.pzmap.xml\")); df.setFixed(true); df.setIgnoreFirstRecord(false); from(\"seda:people\").marshal(df).convertBodyTo(String.class).to(\"jms:queue:people\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-flatpack</artifactId> <version>x.x.x</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/flatpack-dataformat
5.2. Enhancements
5.2. Enhancements The updated Red Hat Enterprise Linux 4.9 kernel packages also provide the following enhancement: BZ#553745 Support for the Intel architectural performance monitoring subsystem (arch_perfmon). On supported CPUs, arch_perfmon offers means to mark performance events and options for configuring and counting these events.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.9_release_notes/sec-kernel-related_updates-enhancements
Chapter 2. Management of services using the Ceph Orchestrator
Chapter 2. Management of services using the Ceph Orchestrator As a storage administrator, after installing the Red Hat Ceph Storage cluster, you can monitor and manage the services in a storage cluster using the Ceph Orchestrator. A service is a group of daemons that are configured together. This section covers the following administrative information: Placement specification of the Ceph Orchestrator . Deploying the Ceph daemons using the command line interface . Deploying the Ceph daemons on a subset of hosts using the command line interface . Service specification of the Ceph Orchestrator . Deploying the Ceph daemons using the service specification . Deploying the Ceph File System mirroring daemon using the service specification . 2.1. Placement specification of the Ceph Orchestrator You can use the Ceph Orchestrator to deploy osds , mons , mgrs , mds , and rgw services. Red Hat recommends deploying services using placement specifications. You need to know where and how many daemons have to be deployed to deploy a service using the Ceph Orchestrator. Placement specifications can either be passed as command line arguments or as a service specification in a yaml file. There are two ways of deploying the services using the placement specification: Using the placement specification directly in the command line interface. For example, if you want to deploy three monitors on the hosts, running the following command deploys three monitors on host01 , host02 , and host03 . Example Using the placement specification in the YAML file. For example, if you want to deploy node-exporter on all the hosts, then you can specify the following in the yaml file. Example 2.2. Deploying the Ceph daemons using the command line interface Using the Ceph Orchestrator, you can deploy the daemons such as Ceph Manager, Ceph Monitors, Ceph OSDs, monitoring stack, and others using the ceph orch command. Placement specification is passed as --placement argument with the Orchestrator commands. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the storage cluster. Procedure Log into the Cephadm shell: Example Use one of the following methods to deploy the daemons on the hosts: Method 1 : Specify the number of daemons and the host names: Syntax Example Method 2 : Add the labels to the hosts and then deploy the daemons using the labels: Add the labels to the hosts: Syntax Example Deploy the daemons with labels: Syntax Example Method 3 : Add the labels to the hosts and deploy using the --placement argument: Add the labels to the hosts: Syntax Example Deploy the daemons using the label placement specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.3. Deploying the Ceph daemons on a subset of hosts using the command line interface You can use the --placement option to deploy daemons on a subset of hosts. You can specify the number of daemons in the placement specification with the name of the hosts to deploy the daemons. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example List the hosts on which you want to deploy the Ceph daemons: Example Deploy the daemons: Syntax Example In this example, the mgr daemons are deployed only on two hosts. Verification List the hosts: Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.4. Service specification of the Ceph Orchestrator A service specification is a data structure to specify the service attributes and configuration settings that is used to deploy the Ceph service. The following is an example of the multi-document YAML file, cluster.yaml , for specifying service specifications: Example The following list are the parameters where the properties of a service specification are defined as follows: service_type : The type of service: Ceph services like mon, crash, mds, mgr, osd, rbd, or rbd-mirror. Ceph gateway like nfs or rgw. Monitoring stack like Alertmanager, Prometheus, Grafana or Node-exporter. Container for custom containers. service_id : A unique name of the service. placement : This is used to define where and how to deploy the daemons. unmanaged : If set to true , the Orchestrator will neither deploy nor remove any daemon associated with this service. Stateless service of Orchestrators A stateless service is a service that does not need information of the state to be available. For example, to start an rgw service, additional information is not needed to start or run the service. The rgw service does not create information about this state in order to provide the functionality. Regardless of when the rgw service starts, the state is the same. 2.5. Disabling automatic management of daemons You can mark the Cephadm services as managed or unmanaged without having to edit and re-apply the service specification. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Procedure Set unmanaged for services by using this command: Syntax Example Set managed for services by using this command: Syntax Example 2.6. Deploying the Ceph daemons using the service specification Using the Ceph Orchestrator, you can deploy daemons such as ceph Manager, Ceph Monitors, Ceph OSDs, monitoring stack, and others using the service specification in a YAML file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Procedure Create the yaml file: Example This file can be configured in two different ways: Edit the file to include the host details in placement specification: Syntax Example Edit the file to include the label details in placement specification: Syntax Example Optional: You can also use extra container arguments in the service specification files such as CPUs, CA certificates, and other files while deploying services: Example Note Red Hat Ceph Storage supports the use of extra arguments to enable additional metrics in node-exporter deployed by Cephadm. Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the Ceph daemons using service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.7. Deploying the Ceph File System mirroring daemon using the service specification Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS file system using the CephFS mirroring daemon ( cephfs-mirror ). Snapshot synchronization copies snapshot data to a remote CephFS, and creates a new snapshot on the remote target with the same name. Using the Ceph Orchestrator, you can deploy cephfs-mirror using the service specification in a YAML file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. A CephFS created. Procedure Create the yaml file: Example Edit the file to include the following: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the cephfs-mirror daemon using the service specification: Example Verification List the service: Example List the hosts, daemons, and processes: Example Additional Resources See Ceph File System mirrors for more information about the cephfs-mirror daemon.
[ "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "service_type: node-exporter placement: host_pattern: '*' extra_entrypoint_args: - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"", "cephadm shell", "ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply DAEMON_NAME label: LABEL", "ceph orch apply mon label:mon", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply DAEMON_NAME --placement=\"label: LABEL \"", "ceph orch apply mon --placement=\"label:mon\"", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --daemon_type=mon ceph orch ps --service_name=mon", "cephadm shell", "ceph orch host ls", "ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 _HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mgr --placement=\"2 host01 host02 host03\"", "ceph orch host ls", "service_type: mon placement: host_pattern: \"mon*\" --- service_type: mgr placement: host_pattern: \"mgr*\" --- service_type: osd service_id: default_drive_group placement: host_pattern: \"osd*\" data_devices: all: true", "ceph orch set-unmanaged SERVICE_NAME", "ceph orch set-unmanaged grafana", "ceph orch set-managed SERVICE_NAME", "ceph orch set-managed mon", "touch mon.yaml", "service_type: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2", "service_type: mon placement: hosts: - host01 - host02 - host03", "service_type: SERVICE_NAME placement: label: \" LABEL_1 \"", "service_type: mon placement: label: \"mon\"", "extra_container_args: - \"-v\" - \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\" - \"--security-opt\" - \"label=disable\" - \"cpus=2\" - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"", "cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml", "cd /var/lib/ceph/mon/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mon.yaml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "touch mirror.yaml", "service_type: cephfs-mirror service_name: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: cephfs-mirror service_name: cephfs-mirror placement: hosts: - host01 - host02 - host03", "cephadm shell --mount mirror.yaml:/var/lib/ceph/mirror.yaml", "cd /var/lib/ceph/", "ceph orch apply -i mirror.yaml", "ceph orch ls", "ceph orch ps --daemon_type=cephfs-mirror" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/management-of-services-using-the-ceph-orchestrator
Chapter 2. Considerations and recommendations
Chapter 2. Considerations and recommendations As a storage administrator, a basic understanding about what to consider before running a Ceph Object Gateway and implementing a multi-site Ceph Object Gateway solution is important. You can learn the hardware and network requirements, knowing what type of workloads work well with a Ceph Object Gateway, and Red Hat's recommendations. Prerequisites Time to understand, consider, and plan a storage solution. 2.1. Network considerations for Red Hat Ceph Storage An important aspect of a cloud storage solution is that storage clusters can run out of IOPS due to network latency, and other factors. Also, the storage cluster can run out of throughput due to bandwidth constraints long before the storage clusters run out of storage capacity. This means that the network hardware configuration must support the chosen workloads to meet price versus performance requirements. Storage administrators prefer that a storage cluster recovers as quickly as possible. Carefully consider bandwidth requirements for the storage cluster network, be mindful of network link oversubscription, and segregate the intra-cluster traffic from the client-to-cluster traffic. Also consider that network performance is increasingly important when considering the use of Solid State Disks (SSD), flash, NVMe, and other high performing storage devices. Ceph supports a public network and a storage cluster network. The public network handles client traffic and communication with Ceph Monitors. The storage cluster network handles Ceph OSD heartbeats, replication, backfilling, and recovery traffic. At a minimum , a single 10 GB Ethernet link should be used for storage hardware, and you can add additional 10 GB Ethernet links for connectivity and throughput. Important Red Hat recommends allocating bandwidth to the storage cluster network, such that it is a multiple of the public network using the osd_pool_default_size as the basis for the multiple on replicated pools. Red Hat also recommends running the public and storage cluster networks on separate network cards. Important Red Hat recommends using 10 GB Ethernet for Red Hat Ceph Storage deployments in production. A 1 GB Ethernet network is not suitable for production storage clusters. In the case of a drive failure, replicating 1 TB of data across a 1 GB Ethernet network takes 3 hours, and 3 TB takes 9 hours. Using 3 TB is the typical drive configuration. By contrast, with a 10 GB Ethernet network, the replication times would be 20 minutes and 1 hour. Remember that when a Ceph OSD fails, the storage cluster recovers by replicating the data it contained to other OSDs within the same failure domain and device class as the failed OSD. The failure of a larger domain such as a rack means that the storage cluster utilizes considerably more bandwidth. When building a storage cluster consisting of multiple racks, which is common for large storage implementations, consider utilizing as much network bandwidth between switches in a "fat tree" design for optimal performance. A typical 10 GB Ethernet switch has 48 10 GB ports and four 40 GB ports. Use the 40 GB ports on the spine for maximum throughput. Alternatively, consider aggregating unused 10 GB ports with QSFP+ and SFP+ cables into more 40 GB ports to connect to other rack and spine routers. Also, consider using LACP mode 4 to bond network interfaces. Additionally, use jumbo frames, with a maximum transmission unit (MTU) of 9000, especially on the backend or cluster network. Before installing and testing a Red Hat Ceph Storage cluster, verify the network throughput. Most performance-related problems in Ceph usually begin with a networking issue. Simple network issues like a kinked or bent Cat-6 cable could result in degraded bandwidth. Use a minimum of 10 GB ethernet for the front side network. For large clusters, consider using 40 GB ethernet for the backend or cluster network. Important For network optimization, Red Hat recommends using jumbo frames for a better CPU per bandwidth ratio, and a non-blocking network switch back-plane. Red Hat Ceph Storage requires the same MTU value throughout all networking devices in the communication path, end-to-end for both public and cluster networks. Verify that the MTU value is the same on all hosts and networking equipment in the environment before using a Red Hat Ceph Storage cluster in production. 2.2. Basic Red Hat Ceph Storage considerations The first consideration for using Red Hat Ceph Storage is developing a storage strategy for the data. A storage strategy is a method of storing data that serves a particular use case. If you need to store volumes and images for a cloud platform like OpenStack, you can choose to store data on faster Serial Attached SCSI (SAS) drives with Solid State Drives (SSD) for journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you can choose to use something more economical, like traditional Serial Advanced Technology Attachment (SATA) drives. Red Hat Ceph Storage can accommodate both scenarios in the same storage cluster, but you need a means of providing the fast storage strategy to the cloud platform, and a means of providing more traditional storage for your object store. One of the most important steps in a successful Ceph deployment is identifying a price-to-performance profile suitable for the storage cluster's use case and workload. It is important to choose the right hardware for the use case. For example, choosing IOPS-optimized hardware for a cold storage application increases hardware costs unnecessarily. Whereas, choosing capacity-optimized hardware for its more attractive price point in an IOPS-intensive workload will likely lead to unhappy users complaining about slow performance. Red Hat Ceph Storage can support multiple storage strategies. Use cases, cost versus benefit performance tradeoffs, and data durability are the primary considerations that help develop a sound storage strategy. Use Cases Ceph provides massive storage capacity, and it supports numerous use cases, such as: The Ceph Block Device client is a leading storage backend for cloud platforms that provides limitless storage for volumes and images with high performance features like copy-on-write cloning. The Ceph Object Gateway client is a leading storage backend for cloud platforms that provides a RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video, and other data. The Ceph File System for traditional file storage. Cost vs. Benefit of Performance Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost versus benefit tradeoff. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data and journaling. Storing a database or object index can benefit from a pool of very fast SSDs, but proves too expensive for other data. SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost versus performance tradeoff. Data Durability In large scale storage clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple replica copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost versus benefit tradeoff: it is cheaper to store fewer copies or coding chunks, but it can lead to the inability to service write requests in a degraded state. Generally, one object with two additional copies, or two coding chunks can allow a storage cluster to service writes in a degraded state while the storage cluster recovers. Replication stores one or more redundant copies of the data across failure domains in case of a hardware failure. However, redundant copies of data can become expensive at scale. For example, to store 1 petabyte of data with triple replication would require a cluster with at least 3 petabytes of storage capacity. Erasure coding stores data as data chunks and coding chunks. In the event of a lost data chunk, erasure coding can recover the lost data chunk with the remaining data chunks and coding chunks. Erasure coding is substantially more economical than replication. For example, using erasure coding with 8 data chunks and 3 coding chunks provides the same redundancy as 3 copies of the data. However, such an encoding scheme uses approximately 1.5x the initial data stored compared to 3x with replication. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the storage cluster. This ensures that the failure of a single storage device or host does not lead to a loss of all of the copies or coding chunks necessary to preclude data loss. You can plan a storage strategy with cost versus benefit tradeoffs, and data durability in mind, then present it to a Ceph client as a storage pool. Important ONLY the data storage pool can use erasure coding. Pools storing service data and bucket indexes use replication. Important Ceph's object copies or coding chunks make RAID solutions obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks. Additional Resources See the Minimum hardware considerations for Red Hat Ceph Storage section of the Red Hat Ceph Storage Installation Guide for more details. 2.2.1. Colocating Ceph daemons and its advantages You can colocate containerized Ceph daemons on the same host. Here are the advantages of colocating some of Ceph's daemons: Significantly improves the total cost of ownership (TCO) at small scale. Can increase overall performance. Reduces the amount of physical hosts for a minimum configuration. Better resource utilization. Upgrading Red Hat Ceph Storage is easier. By using containers you can colocate one daemon from the following list with a Ceph OSD daemon ( ceph-osd ). Additionally, for the Ceph Object Gateway ( radosgw ), Ceph Metadata Server ( ceph-mds ), and Grafana, you can colocate it either with a Ceph OSD daemon, plus a daemon from the list below. Ceph Metadata Server ( ceph-mds ) Ceph Monitor ( ceph-mon ) Ceph Manager ( ceph-mgr ) NFS Ganesha ( nfs-ganesha ) Ceph Manager ( ceph-grafana ) Table 2.1. Daemon Placement Example Host Name Daemon Daemon Daemon host1 OSD Monitor & Manager Prometheus host2 OSD Monitor & Manager RGW host3 OSD Monitor & Manager RGW host4 OSD Metadata Server host5 OSD Metadata Server Note Because ceph-mon and ceph-mgr work closely together, they are not considered two separate daemons for the purposes of colocation. Colocating Ceph daemons can be done from the command line interface, by using the --placement option to the ceph orch command, or you can use a service specification YAML file. Command line Example Service Specification YAML File Example Red Hat recommends colocating the Ceph Object Gateway with Ceph OSD containers to increase performance. To achieve the highest performance without incurring additional hardware cost, use two Ceph Object Gateway daemons per host. Ceph Object Gateway Command line Example Ceph Object Gateway Service Specification YAML File Example The diagrams below shows the difference between storage clusters with colocated and non-colocated daemons. Figure 2.1. Colocated Daemons Figure 2.2. Non-colocated Daemons Additional resources See the Management of services using the Ceph Orchestrator chapter in the Red Hat Ceph Storage Operations Guide for more details on using the --placement option. See the Red Hat Ceph Storage RGW deployment strategies and sizing guidance article for more information. 2.3. Red Hat Ceph Storage workload considerations One of the key benefits of a Ceph storage cluster is the ability to support different types of workloads within the same storage cluster using performance domains. Different hardware configurations can be associated with each performance domain. Storage administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons, referred to as Ceph OSDs, or simply OSDs, both use the Controlled Replication Under Scalable Hashing (CRUSH) algorithm for the storage and retrieval of objects. Ceph OSDs can run in containers within the storage cluster. A CRUSH map describes a topography of cluster resources, and the map exists both on client hosts as well as Ceph Monitor hosts within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery-allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration. The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy, specifically an acyclic graph, and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. Ceph implements performance domains with device "classes". For example, you can have these performance domains coexisting in the same Red Hat Ceph Storage cluster: Hard disk drives (HDDs) are typically appropriate for cost and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads, such as MySQL and MariaDB, often use SSDs. Figure 2.3. Performance and Failure Domains Workloads Red Hat Ceph Storage is optimized for three primary workloads. Important Carefully consider the workload being run by Red Hat Ceph Storage clusters BEFORE considering what hardware to purchase, because it can significantly impact the price and performance of the storage cluster. For example, if the workload is capacity-optimized and the hardware is better suited to a throughput-optimized workload, then hardware will be more expensive than necessary. Conversely, if the workload is throughput-optimized and the hardware is better suited to a capacity-optimized workload, then the storage cluster can suffer from poor performance. IOPS optimized: Input, output per second (IOPS) optimization deployments are suitable for cloud computing operations, such as running MYSQL or MariaDB instances as virtual machines on OpenStack. IOPS optimized deployments require higher performance storage such as 15k RPM SAS drives and separate SSD journals to handle frequent write operations. Some high IOPS scenarios use all flash storage to improve IOPS and total throughput. An IOPS-optimized storage cluster has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized: Throughput-optimized deployments are suitable for serving up significant amounts of data, such as graphic, audio, and video content. Throughput-optimized deployments require high bandwidth networking hardware, controllers, and hard disk drives with fast sequential read and write characteristics. If fast data access is a requirement, then use a throughput-optimized storage strategy. Also, if fast write performance is a requirement, using Solid State Disks (SSD) for journals will substantially improve write performance. A throughput-optimized storage cluster has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media, such as 4k video. Capacity optimized: Capacity-optimized deployments are suitable for storing significant amounts of data as inexpensively as possible. Capacity-optimized deployments typically trade performance for a more attractive price point. For example, capacity-optimized deployments often use slower and less expensive SATA drives and co-locate journals rather than using SSDs for journaling. A cost and capacity-optimized storage cluster has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Uses for a cost and capacity-optimized storage cluster are: Typically object storage. Erasure coding for maximizing usable capacity Object archive. Video, audio, and image object repositories. 2.4. Ceph Object Gateway considerations Another important aspect of designing a storage cluster is to determine if the storage cluster will be in one data center site or span multiple data center sites. Multi-site storage clusters benefit from geographically distributed failover and disaster recovery, such as long-term power outages, earthquakes, hurricanes, floods or other disasters. Additionally, multi-site storage clusters can have an active-active configuration, which can direct client applications to the closest available storage cluster. This is a good storage strategy for content delivery networks. Consider placing data as close to the client as possible. This is important for throughput-intensive workloads, such as streaming 4k video. Important Red Hat recommends identifying realm, zone group and zone names BEFORE creating Ceph's storage pools. Prepend some pool names with the zone name as a standard naming convention. Additional Resources See the Multi-site configuration and administration section in the Red Hat Ceph Storage Object Gateway Guide for more information. 2.4.1. Administrative data storage A Ceph Object Gateway stores administrative data in a series of pools defined in an instance's zone configuration. For example, the buckets, users, user quotas, and usage statistics discussed in the subsequent sections are stored in pools in the Ceph storage cluster. By default, Ceph Object Gateway creates the following pools and maps them to the default zone. .rgw.root .default.rgw.control .default.rgw.meta .default.rgw.log .default.rgw.buckets.index .default.rgw.buckets.data .default.rgw.buckets.non-ec Note The .default.rgw.buckets.index pool is created only after the bucket is created in Ceph Object Gateway, while the .default.rgw.buckets.data pool is created after the data is uploaded to the bucket. Consider creating these pools manually so you can set the CRUSH ruleset and the number of placement groups. In a typical configuration, the pools that store the Ceph Object Gateway's administrative data will often use the same CRUSH ruleset, and use fewer placement groups, because there are 10 pools for the administrative data. Red Hat recommends that the .rgw.root pool and the service pools use the same CRUSH hierarchy, and use at least node as the failure domain in the CRUSH rule. Red Hat recommends using replicated for data durability, and NOT erasure for the .rgw.root pool, and the service pools. The mon_pg_warn_max_per_osd setting warns you if you assign too many placement groups to a pool, 300 by default. You may adjust the value to suit your needs and the capabilities of your hardware where n is the maximum number of PGs per OSD. Note For service pools, including .rgw.root , the suggested PG count from the Ceph placement groups (PGs) per pool calculator is substantially less than the target PGs per Ceph OSD. Also, ensure the number of Ceph OSDs is set in step 4 of the calculator. Important Garbage collection uses the .log pool with regular RADOS objects instead of OMAP. In future releases, more features will store metadata on the .log pool. Therefore, Red Hat recommends using NVMe/SSD Ceph OSDs for the .log pool. .rgw.root Pool The pool where the Ceph Object Gateway configuration is stored. This includes realms, zone groups, and zones. By convention, its name is not prepended with the zone name. Service Pools The service pools store objects related to service control, garbage collection, logging, user information, and usage. By convention, these pool names have the zone name prepended to the pool name. . ZONE_NAME .rgw.control : The control pool. . ZONE_NAME .log : The log pool contains logs of all bucket, container, and object actions, such as create, read, update, and delete. . ZONE_NAME .rgw.buckets.index : This pool stores index of the buckets. . ZONE_NAME .rgw.buckets.data : This pool stores data of the buckets. . ZONE_NAME .rgw.meta : The metadata pool stores user_keys and other critical metadata. . ZONE_NAME .meta:users.uid : The user ID pool contains a map of unique user IDs. . ZONE_NAME .meta:users.keys : The keys pool contains access keys and secret keys for each user ID. . ZONE_NAME .meta:users.email : The email pool contains email addresses associated to a user ID. . ZONE_NAME .meta:users.swift : The Swift pool contains the Swift subuser information for a user ID. Additional Resources See the About pools section in the Red Hat Ceph Storage Object Gateway Guide for more details. See the Red Hat Ceph Storage Storage Strategies Guide for additional details. 2.4.2. Index pool When selecting OSD hardware for use with a Ceph Object Gateway-- irrespective of the use case-- an OSD node that has at least one high performance drive, either an SSD or NVMe drive, is required for storing the index pool. This is particularly important when buckets contain a large number of objects. For Red Hat Ceph Storage running Bluestore, Red Hat recommends deploying an NVMe drive as a block.db device, rather than as a separate pool. Ceph Object Gateway index data is written only into an object map (OMAP). OMAP data for BlueStore resides on the block.db device on an OSD. When an NVMe drive functions as a block.db device for an HDD OSD and when the index pool is backed by HDD OSDs, the index data will ONLY be written to the block.db device. As long as the block.db partition/lvm is sized properly at 4% of block, this configuration is all that is needed for BlueStore. Note Red Hat does not support HDD devices for index pools. For more information on supported configurations, see the Red Hat Ceph Storage: Supported configurations article. An index entry is approximately 200 bytes of data, stored as an OMAP in rocksdb . While this is a trivial amount of data, some uses of Ceph Object Gateway can result in tens or hundreds of millions of objects in a single bucket. By mapping the index pool to a CRUSH hierarchy of high performance storage media, the reduced latency provides a dramatic performance improvement when buckets contain very large numbers of objects. Important In a production cluster, a typical OSD node will have at least one SSD or NVMe drive for storing the OSD journal and the index pool or block.db device, which use separate partitions or logical volumes for the same physical drive. 2.4.3. Data pool The data pool is where the Ceph Object Gateway stores the object data for a particular storage policy. The data pool has a full complement of placement groups (PGs), not the reduced number of PGs for service pools. Consider using erasure coding for the data pool, as it is substantially more efficient than replication, and can significantly reduce the capacity requirements while maintaining data durability. To use erasure coding, create an erasure code profile. See the Erasure Code Profiles section in the Red Hat Ceph Storage Storage Strategies Guide for more details. Important Choosing the correct profile is important because you cannot change the profile after you create the pool. To modify a profile, you must create a new pool with a different profile and migrate the objects from the old pool to the new pool. The default configuration is two data chunks and one encoding chunk, which means only one OSD can be lost. For higher resiliency, consider a larger number of data and encoding chunks. For example, some large scale systems use 8 data chunks and 3 encoding chunks, which allows 3 OSDs to fail without losing data. Important Each data and encoding chunk SHOULD get stored on a different node or host at a minimum. For smaller storage clusters, this makes using rack impractical as the minimum CRUSH failure domain for a larger number of data and encoding chunks. Consequently, it is common for the data pool to use a separate CRUSH hierarchy with host as the minimum CRUSH failure domain. Red Hat recommends host as the minimum failure domain. If erasure code chunks get stored on Ceph OSDs within the same host, a host failure, such as a failed journal or network card, could lead to data loss. To create a data pool, run the ceph osd pool create command with the pool name, the number of PGs and PGPs, the erasure data durability method, the erasure code profile, and the name of the rule. 2.4.4. Data extra pool The data_extra_pool is for data that cannot use erasure coding. For example, multi-part uploads allow uploading a large object, such as a movie in multiple parts. These parts must first be stored without erasure coding. Erasure coding applies to the whole object, not the partial uploads. Note The placement group (PG) per Pool Calculator recommends a smaller number of PGs per pool for the data_extra_pool ; however, the PG count is approximately twice the number of PGs as the service pools and the same as the bucket index pool. To create a data extra pool, run the ceph osd pool create command with the pool name, the number of PGs and PGPs, the replicated data durability method, and the name of the rule. For example: 2.5. Developing CRUSH hierarchies As a storage administrator, when deploying a Ceph storage cluster and an Object Gateway, typically the Ceph Object Gateway has a default zone group and zone. The Ceph storage cluster will have default pools, which in turn will use a CRUSH map with a default CRUSH hierarchy and a default CRUSH rule. Important The default rbd pool can use the default CRUSH rule. DO NOT delete the default rule or hierarchy if Ceph clients have used them to store client data. Production gateways typically use a custom realm, zone group and zone named according to the use and geographic location of the gateways. Additionally, the Ceph storage cluster will have a CRUSH map that has multiple CRUSH hierarchies. Service Pools: At least one CRUSH hierarchy will be for service pools and potentially for data. The service pools include .rgw.root and the service pools associated with the zone. Service pools typically fall under a single CRUSH hierarchy, and use replication for data durability. A data pool may also use the CRUSH hierarchy, but the pool will usually be configured with erasure coding for data durability. Index: At least one CRUSH hierarchy SHOULD be for the index pool, where the CRUSH hierarchy maps to high performance media, such as SSD or NVMe drives. Bucket indices can be a performance bottleneck. Red Hat recommends to use SSD or NVMe drives in this CRUSH hierarchy. Create partitions for indices on SSDs or NVMe drives used for Ceph OSD journals. Additionally, an index should be configured with bucket sharding. Placement Pools: The placement pools for each placement target include the bucket index, the data bucket, and the bucket extras. These pools can fall under separate CRUSH hierarchies. Since the Ceph Object Gateway can support multiple storage policies, the bucket pools of the storage policies may be associated with different CRUSH hierarchies, reflecting different use cases, such as IOPS-optimized, throughput-optimized, and capacity-optimized. The bucket index pool SHOULD use its own CRUSH hierarchy to map the bucket index pool to higher performance storage media, such as SSD or NVMe drives. 2.5.1. Creating CRUSH roots From the command line on the administration node, create CRUSH roots in the CRUSH map for each CRUSH hierarchy. There MUST be at least one CRUSH hierarchy for service pools that may also potentially serve data storage pools. There SHOULD be at least one CRUSH hierarchy for the bucket index pool, mapped to high performance storage media, such as SSDs or NVMe drives. For details on CRUSH hierarchies, see the CRUSH Hierarchies section in the Red Hat Ceph Storage Storage Strategies Guide 6 . To manually edit a CRUSH map, see the Editing a CRUSH Map section in the Red Hat Ceph Storage Storage Strategies Guide 6 . In the following examples, the hosts named data0 , data1 , and data2 use extended logical names, such as data0-sas-ssd , data0-index , and so forth in the CRUSH map, because there are multiple CRUSH hierarchies pointing to the same physical hosts. A typical CRUSH root might represent nodes with SAS drives and SSDs for journals. For example: A CRUSH root for bucket indexes SHOULD represent high performance media, such as SSD or NVMe drives. Consider creating partitions on SSD or NVMe media that store OSD journals. For example: 2.5.2. Creating CRUSH rules Like the default CRUSH hierarchy, the CRUSH map also contains a default CRUSH rule. Note The default rbd pool may use this rule. DO NOT delete the default rule if other pools have used it to store customer data. For general details on CRUSH rules, see the CRUSH rules section in the Red Hat Ceph Storage Storage Strategies Guide for Red Hat Ceph Storage 6. To manually edit a CRUSH map, see the Editing a CRUSH map section in the Red Hat Ceph Storage Storage Strategies Guide for Red Hat Ceph Storage 6. For each CRUSH hierarchy, create a CRUSH rule. The following example illustrates a rule for the CRUSH hierarchy that will store the service pools, including .rgw.root . In this example, the root sas-ssd serves as the main CRUSH hierarchy. It uses the name rgw-service to distinguish itself from the default rule. The step take sas-ssd line tells the pool to use the sas-ssd root created in Creating CRUSH roots , whose child buckets contain OSDs with SAS drives and high performance storage media, such as SSD or NVMe drives, for journals in a high throughput hardware configuration. The type rack portion of step chooseleaf is the failure domain. In the following example, it is a rack. Note In the foregoing example, if data gets replicated three times, there should be at least three racks in the cluster containing a similar number of OSD nodes. Tip The type replicated setting has NOTHING to do with data durability, the number of replicas, or the erasure coding. Only replicated is supported. The following example illustrates a rule for the CRUSH hierarchy that will store the data pool. In this example, the root sas-ssd serves as the main CRUSH hierarchy- the same CRUSH hierarchy as the service rule. It uses rgw-throughput to distinguish itself from the default rule and rgw-service . The step take sas-ssd line tells the pool to use the sas-ssd root created in Creating CRUSH roots , whose child buckets contain OSDs with SAS drives and high performance storage media, such as SSD or NVMe drives, in a high throughput hardware configuration. The type host portion of step chooseleaf is the failure domain. In the following example, it is a host. Notice that the rule uses the same CRUSH hierarchy, but a different failure domain. Note In the foregoing example, if the pool uses erasure coding with a larger number of data and encoding chunks than the default, there should be at least as many racks in the cluster containing a similar number of OSD nodes to facilitate the erasure coding chunks. For smaller clusters, this may not be practical, so the foregoing example uses host as the CRUSH failure domain. The following example illustrates a rule for the CRUSH hierarchy that will store the index pool. In this example, the root index serves as the main CRUSH hierarchy. It uses rgw-index to distinguish itself from rgw-service and rgw-throughput . The step take index line tells the pool to use the index root created in Creating CRUSH roots , whose child buckets contain high performance storage media, such as SSD or NVMe drives, or partitions on SSD or NVMe drives that also store OSD journals. The type rack portion of step chooseleaf is the failure domain. In the following example, it is a rack. Additional Resources For general details on CRUSH hierarchies, see the CRUSH Administration section of the Red Hat Ceph Storage Storage Strategies Guide . 2.6. Ceph Object Gateway multi-site considerations A Ceph Object Gateway multi-site configuration requires at least two Red Hat Ceph Storage clusters, and at least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Typically, the two Red Hat Ceph Storage clusters will be in geographically separate locations; however, this same multi-site configuration can work on two Red Hat Ceph Storage clusters located at the same physical site. Multi-site configurations require a primary zone group and a primary zone. Additionally, each zone group requires a primary zone. Zone groups might have one or more secondary zones. Important The primary zone within the primary zone group of a realm is responsible for storing the primary copy of the realm's metadata, including users, quotas, and buckets. This metadata gets synchronized to secondary zones and secondary zone groups automatically. Metadata operations issued with the radosgw-admin command line interface (CLI) MUST be issued on a node within the primary zone of the primary zone group to ensure that they synchronize to the secondary zone groups and zones. Currently, it is possible to issue metadata operations on secondary zones and zone groups, but it is NOT recommended because they WILL NOT be synchronized, which can lead to fragmentation of the metadata. The diagrams below illustrate the possible one, and two realm configurations in multi-site Ceph Object Gateway environments. Figure 2.4. One Realm Figure 2.5. Two Realms Figure 2.6. Two Realms Variant 2.7. Considering storage sizing One of the most important factors in designing a cluster is to determine the storage requirements (sizing). Ceph Storage is designed to scale into petabytes and beyond. The following examples are common sizes for Ceph storage clusters. Small: 250 terabytes Medium: 1 petabyte Large: 2 petabytes or more Sizing includes current needs and near future needs. Consider the rate at which the gateway client will add new data to the cluster. That can differ from use-case to use-case. For example, recording 4k videos or storing medical images can add significant amounts of data faster than less storage-intensive information, such as financial market data. Additionally, consider that the data durability methods, such as replication versus erasure coding, can have a significant impact on the storage media required. For additional information on sizing, see the Red Hat Ceph Storage Hardware Guide and its associated links for selecting OSD hardware. 2.8. Considering storage density Another important aspect of Ceph's design, includes storage density. Generally, a storage cluster stores data across at least 10 nodes to ensure reasonable performance when replicating, backfilling, and recovery. If a node fails, with at least 10 nodes in the storage cluster, only 10% of the data has to move to the surviving nodes. If the number of nodes is substantially less, a higher percentage of the data must move to the surviving nodes. Additionally, the full_ratio and near_full_ratio options need to be set to accommodate a node failure to ensure that the storage cluster can write data. For this reason, it is important to consider storage density. Higher storage density is not necessarily a good idea. Another factor that favors more nodes over higher storage density is erasure coding. When writing an object using erasure coding and using node as the minimum CRUSH failure domain, the Ceph storage cluster will need as many nodes as data and coding chunks. For example, a cluster using k=8, m=3 should have at least 11 nodes so that each data or coding chunk is stored on a separate node. Hot-swapping is also an important consideration. Most modern servers support drive hot-swapping. However, some hardware configurations require removing more than one drive to replace a drive. Red Hat recommends avoiding such configurations, because they can bring down more Ceph OSDs than required when swapping out failed disks. 2.9. Considering disks for the Ceph Monitor nodes Ceph Monitors use rocksdb , which is sensitive to synchronous write latency. Red Hat strongly recommends using SSD disks to store the Ceph Monitor data. Choose SSD disks that have sufficient sequential write and throughput characteristics. 2.10. Adjusting backfill and recovery settings I/O is negatively impacted by both backfilling and recovery operations, leading to poor performance and unhappy end users. To help accommodate I/O demand during a cluster expansion or recovery, set the following options and values in the Ceph Configuration file: 2.11. Adjusting the cluster map size By default, the ceph-osd daemon caches 500 osdmaps. Even with deduplication, the map might consume a lot of memory per daemon. Tuning the cache size in the Ceph configuration might help reduce memory consumption significantly. For example: For Red Hat Ceph Storage version 3 and later, the ceph-manager daemon handles PG queries, so the cluster map should not impact performance. 2.12. Adjusting scrubbing By default, Ceph performs light scrubbing daily and deep scrubbing weekly. Light scrubbing checks object sizes and checksums to ensure that PGs are storing the same object data. Over time, disk sectors can go bad irrespective of object sizes and checksums. Deep scrubbing checks an object's content with that of its replicas to ensure that the actual contents are the same. In this respect, deep scrubbing ensures data integrity in the manner of fsck , but the procedure imposes an I/O penalty on the cluster. Even light scrubbing can impact I/O. The default settings may allow Ceph OSDs to initiate scrubbing at inopportune times, such as peak operating times or periods with heavy loads. End users may experience latency and poor performance when scrubbing operations conflict with end user operations. To prevent end users from experiencing poor performance, Ceph provides a number of scrubbing settings that can limit scrubbing to periods with lower loads or during off-peak hours. For details, see the Scrubbing the OSD section in the Red Hat Ceph Storage Configuration Guide . If the cluster experiences high loads during the day and low loads late at night, consider restricting scrubbing to night time hours. For example: If time constraints aren't an effective method of determining a scrubbing schedule, consider using the osd_scrub_load_threshold . The default value is 0.5 , but it could be modified for low load conditions. For example: 2.13. Increase objecter_inflight_ops To improve scalability, you can edit the value of the objecter_inflight_ops parameter, which specifies the maximum number of unsent I/O requests allowed. This parameter is used for client traffic control. 2.14. Increase rgw_thread_pool_size To improve scalability, you can edit the value of the rgw_thread_pool_size parameter, which is the size of the thread pool. The new beast frontend is not restricted by the thread pool size to accept new connections. 2.15. Tuning considerations for the Linux kernel when running Ceph Production Red Hat Ceph Storage clusters generally benefit from tuning the operating system, specifically around limits and memory allocation. Ensure that adjustments are set for all hosts within the storage cluster. You can also open a case with Red Hat support asking for additional guidance. Increase the File Descriptors The Ceph Object Gateway can hang if it runs out of file descriptors. You can modify the /etc/security/limits.conf file on Ceph Object Gateway hosts to increase the file descriptors for the Ceph Object Gateway. Adjusting the ulimit value for Large Storage Clusters When running Ceph administrative commands on large storage clusters, for example, with 1024 Ceph OSDs or more, create an /etc/security/limits.d/50-ceph.conf file on each host that runs administrative commands with the following contents: Replace USER_NAME with the name of the non-root user account that runs the Ceph administrative commands. Note The root user's ulimit value is already set to unlimited by default on Red Hat Enterprise Linux. Additional Resources For more details about Ceph's various internal components and the strategies around those components, see the Red Hat Ceph Storage Storage Strategies Guide .
[ "ceph orch apply mon --placement=\"host1 host2 host3\"", "service_type: mon placement: hosts: - host01 - host02 - host03", "ceph orch apply -i mon.yml", "ceph orch apply rgw example --placement=\"6 host1 host2 host3\"", "service_type: rgw service_id: example placement: count: 6 hosts: - host01 - host02 - host03", "ceph orch apply -i rgw.yml", "mon_pg_warn_max_per_osd = n", "ceph osd pool create .us-west.rgw.buckets.non-ec 64 64 replicated rgw-service", "## SAS-SSD ROOT DECLARATION ## root sas-ssd { id -1 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-sas-ssd weight 4.000 item data1-sas-ssd weight 4.000 item data0-sas-ssd weight 4.000 }", "## INDEX ROOT DECLARATION ## root index { id -2 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-index weight 1.000 item data1-index weight 1.000 item data0-index weight 1.000 }", "## SERVICE RULE DECLARATION ## rule rgw-service { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type rack step emit }", "## THROUGHPUT RULE DECLARATION ## rule rgw-throughput { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type host step emit }", "## INDEX RULE DECLARATION ## rule rgw-index { type replicated min_size 1 max_size 10 step take index step chooseleaf firstn 0 type rack step emit }", "[osd] osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1", "ceph config set global osd_map_message_max 10 ceph config set osd osd_map_cache_size 20 ceph config set osd osd_map_share_max_epochs 10 ceph config set osd osd_pg_epoch_persisted_max_stale 10", "[osd] osd_scrub_begin_hour = 23 #23:01H, or 10:01PM. osd_scrub_end_hour = 6 #06:01H or 6:01AM.", "[osd] osd_scrub_load_threshold = 0.25", "objecter_inflight_ops = 24576", "rgw_thread_pool_size = 512", "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/object_gateway_guide/considerations-and-recommendations
Chapter 9. Configuring client certificate authentication
Chapter 9. Configuring client certificate authentication Add client trust stores to your project and configure Data Grid to allow connections only from clients that present valid certificates. This increases security of your deployment by ensuring that clients are trusted by a public certificate authority (CA). 9.1. Client certificate authentication Client certificate authentication restricts in-bound connections based on the certificates that clients present. You can configure Data Grid to use trust stores with either of the following strategies: Validate To validate client certificates, Data Grid requires a trust store that contains any part of the certificate chain for the signing authority, typically the root CA certificate. Any client that presents a certificate signed by the CA can connect to Data Grid. If you use the Validate strategy for verifying client certificates, you must also configure clients to provide valid Data Grid credentials if you enable authentication. Authenticate Requires a trust store that contains all public client certificates in addition to the root CA certificate. Only clients that present a signed certificate can connect to Data Grid. If you use the Authenticate strategy for verifying client certificates, you must ensure that certificates contain valid Data Grid credentials as part of the distinguished name (DN). 9.2. Enabling client certificate authentication To enable client certificate authentication, you configure Data Grid to use trust stores with either the Validate or Authenticate strategy. Procedure Set either Validate or Authenticate as the value for the spec.security.endpointEncryption.clientCert field in your Infinispan CR. Note The default value is None . Specify the secret that contains the client trust store with the spec.security.endpointEncryption.clientCertSecretName field. By default Data Grid Operator expects a trust store secret named <cluster-name>-client-cert-secret . Note The secret must be unique to each Infinispan CR instance in the OpenShift cluster. When you delete the Infinispan CR, OpenShift also automatically deletes the associated secret. spec: security: endpointEncryption: type: Secret certSecretName: tls-secret clientCert: Validate clientCertSecretName: infinispan-client-cert-secret Apply the changes. steps Provide Data Grid Operator with a trust store that contains all client certificates. Alternatively you can provide certificates in PEM format and let Data Grid generate a client trust store. 9.3. Providing client truststores If you have a trust store that contains the required certificates you can make it available to Data Grid Operator. Data Grid supports trust stores in PKCS12 format only. Procedure Specify the name of the secret that contains the client trust store as the value of the metadata.name field. Note The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field. Provide the password for the trust store with the stringData.truststore-password field. Specify the trust store with the data.truststore.p12 field. apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: truststore.p12: "<base64_encoded_PKCS12_trust_store>" Apply the changes. 9.4. Providing client certificates Data Grid Operator can generate a trust store from certificates in PEM format. Procedure Specify the name of the secret that contains the client trust store as the value of the metadata.name field. Note The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field. Specify the signing certificate, or CA certificate bundle, as the value of the data.trust.ca field. If you use the Authenticate strategy to verify client identities, add the certificate for each client that can connect to Data Grid endpoints with the data.trust.cert.<name> field. Note Data Grid Operator uses the <name> value as the alias for the certificate when it generates the trust store. Optionally provide a password for the trust store with the stringData.truststore-password field. If you do not provide one, Data Grid Operator sets "password" as the trust store password. apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: trust.ca: "<base64_encoded_CA_certificate>" trust.cert.client1: "<base64_encoded_client_certificate>" trust.cert.client2: "<base64_encoded_client_certificate>" Apply the changes.
[ "spec: security: endpointEncryption: type: Secret certSecretName: tls-secret clientCert: Validate clientCertSecretName: infinispan-client-cert-secret", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: truststore.p12: \"<base64_encoded_PKCS12_trust_store>\"", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: trust.ca: \"<base64_encoded_CA_certificate>\" trust.cert.client1: \"<base64_encoded_client_certificate>\" trust.cert.client2: \"<base64_encoded_client_certificate>\"" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/client-certificates
5.9.6. Adding/Removing Storage
5.9.6. Adding/Removing Storage While most of the steps required to add or remove storage depend more on the system hardware than the system software, there are aspects of the procedure that are specific to your operating environment. This section explores the steps necessary to add and remove storage that are specific to Red Hat Enterprise Linux. 5.9.6.1. Adding Storage The process of adding storage to a Red Hat Enterprise Linux system is relatively straightforward. Here are the steps that are specific to Red Hat Enterprise Linux: Partitioning Formatting the partition(s) Updating /etc/fstab The following sections explore each step in more detail. 5.9.6.1.1. Partitioning Once the disk drive has been installed, it is time to create one or more partitions to make the space available to Red Hat Enterprise Linux. There is more than one way of doing this: Using the command-line fdisk utility program Using parted , another command-line utility program Although the tools may be different, the basic steps are the same. In the following example, the commands necessary to perform these steps using fdisk are included: Select the new disk drive (the drive's name can be identified by following the device naming conventions outlined in Section 5.9.1, "Device Naming Conventions" ). Using fdisk , this is done by including the device name when you start fdisk : View the disk drive's partition table, to ensure that the disk drive to be partitioned is, in fact, the correct one. In our example, fdisk displays the partition table by using the p command: Delete any unwanted partitions that may already be present on the new disk drive. This is done using the d command in fdisk : The process would be repeated for all unneeded partitions present on the disk drive. Create the new partition(s), being sure to specify the desired size and file system type. Using fdisk , this is a two-step process -- first, creating the partition (using the n command): Second, by setting the file system type (using the t command): Partition type 82 represents a Linux swap partition. Save your changes and exit the partitioning program. This is done in fdisk by using the w command: Warning When partitioning a new disk drive, it is vital that you are sure the disk drive you are about to partition is the correct one. Otherwise, you may inadvertently partition a disk drive that is already in use, resulting in lost data. Also make sure you have decided on the best partition size. Always give this matter serious thought, because changing it later is much more difficult than taking a bit of time now to think things through. 5.9.6.1.2. Formatting the Partition(s) Formatting partitions under Red Hat Enterprise Linux is done using the mkfs utility program. However, mkfs does not actually do the work of writing the file-system-specific information onto a disk drive; instead it passes control to one of several other programs that actually create the file system. This is the time to look at the mkfs. <fstype> man page for the file system you have selected. For example, look at the mkfs.ext3 man page to see the options available to you when creating a new ext3 file system. In general, the mkfs. <fstype> programs provide reasonable defaults for most configurations; however here are some of the options that system administrators most commonly change: Setting a volume label for later use in /etc/fstab On very large hard disks, setting a lower percentage of space reserved for the super-user Setting a non-standard block size and/or bytes per inode for configurations that must support either very large or very small files Checking for bad blocks before formatting Once file systems have been created on all the appropriate partitions, the disk drive is properly configured for use. , it is always best to double-check your work by manually mounting the partition(s) and making sure everything is in order. Once everything checks out, it is time to configure your Red Hat Enterprise Linux system to automatically mount the new file system(s) whenever it boots. 5.9.6.1.3. Updating /etc/fstab As outlined in Section 5.9.5, "Mounting File Systems Automatically with /etc/fstab " , you must add the necessary line(s) to /etc/fstab to ensure that the new file system(s) are mounted whenever the system reboots. Once you have updated /etc/fstab , test your work by issuing an "incomplete" mount , specifying only the device or mount point. Something similar to one of the following commands is sufficient: (Replacing /home or /dev/hda3 with the mount point or device for your specific situation.) If the appropriate /etc/fstab entry is correct, mount obtains the missing information from it and completes the mount operation. At this point you can be relatively confident that /etc/fstab is configured properly to automatically mount the new storage every time the system boots (although if you can afford a quick reboot, it would not hurt to do so -- just to be sure).
[ "fdisk /dev/hda", "Command (m for help): p Disk /dev/hda: 255 heads, 63 sectors, 1244 cylinders Units = cylinders of 16065 * 512 bytes Device Boot Start End Blocks Id System /dev/hda1 * 1 17 136521 83 Linux /dev/hda2 18 83 530145 82 Linux swap /dev/hda3 84 475 3148740 83 Linux /dev/hda4 476 1244 6176992+ 83 Linux", "Command (m for help): d Partition number (1-4): 1", "Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-767): 1 Last cylinder or +size or +sizeM or +sizeK: +512M", "Command (m for help): t Partition number (1-4): 1 Hex code (type L to list codes): 82", "Command (m for help): w", "mount /home mount /dev/hda3" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-addrem
Part I. Vulnerability reporting with Clair on Red Hat Quay overview
Part I. Vulnerability reporting with Clair on Red Hat Quay overview The content in this guide explains the key purposes and concepts of Clair on Red Hat Quay. It also contains information about Clair releases and the location of official Clair containers.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/vulnerability_reporting_with_clair_on_red_hat_quay/vulnerability-reporting-clair-quay-overview
6.3. Colocation of Resources
6.3. Colocation of Resources A colocation constraint determines that the location of one resource depends on the location of another resource. There is an important side effect of creating a colocation constraint between two resources: it affects the order in which resources are assigned to a node. This is because you cannot place resource A relative to resource B unless you know where resource B is. So when you are creating colocation constraints, it is important to consider whether you should colocate resource A with resource B or resource B with resource A. Another thing to keep in mind when creating colocation constraints is that, assuming resource A is collocated with resource B, the cluster will also take into account resource A's preferences when deciding which node to choose for resource B. The following command creates a colocation constraint. For information on master and slave resources, see Section 8.2, "Multi-State Resources: Resources That Have Multiple Modes" . Table 6.3, "Properties of a Colocation Constraint" . summarizes the properties and options for configuring colocation constraints. Table 6.3. Properties of a Colocation Constraint Field Description source_resource The colocation source. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. target_resource The colocation target. The cluster will decide where to put this resource first and then decide where to put the source resource. score Positive values indicate the resource should run on the same node. Negative values indicate the resources should not run on the same node. A value of + INFINITY , the default value, indicates that the source_resource must run on the same node as the target_resource . A value of - INFINITY indicates that the source_resource must not run on the same node as the target_resource . 6.3.1. Mandatory Placement Mandatory placement occurs any time the constraint's score is +INFINITY or -INFINITY . In such cases, if the constraint cannot be satisfied, then the source_resource is not permitted to run. For score=INFINITY , this includes cases where the target_resource is not active. If you need myresource1 to always run on the same machine as myresource2 , you would add the following constraint: Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason) then myresource1 will not be allowed to run. Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2 . In this case use score=-INFINITY Again, by specifying -INFINITY , the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere.
[ "pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]", "pcs constraint colocation add myresource1 with myresource2 score=INFINITY", "pcs constraint colocation add myresource1 with myresource2 score=-INFINITY" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-colocationconstraints-HAAR
Chapter 10. Cloning Subsystems
Chapter 10. Cloning Subsystems When a new subsystem instance is first configured, the Red Hat Certificate System allows subsystems to be cloned, or duplicated, for high availability of the Certificate System. The cloned instances run on different machines to avoid a single point of failure and their databases are synchronized through replication. The master CA and its clones are functionally identical, they only differ in serial number assignments and CRL generation. Therefore, this chapter refers to master or any of its clones as replicated CAs . 10.1. Backing up Subsystem Keys from a Software Database Ideally, the keys for the master instance are backed up when the instance is first created. If the keys were not backed up then or if the backup file is lost, then it is possible to extract the keys from the internal software database for the subsystem instance using the PKCS12Export utility. For example: Then copy the PKCS #12 file to the clone machine to be used in the clone instance configuration. For more details, see Section 2.7.6, "Cloning and Key Stores" . Note Keys cannot be exported from an HSM. However, in a typical deployment, HSMs support networked access, as long as the clone instance is installed using the same HSM as the master. If both instances use the same key store, then the keys are naturally available to the clone. If backing up keys from the HSM is required, contact the HSM manufacturer for assistance.
[ "PKCS12Export -debug -d /var/lib/pki/ instance_name /alias -w p12pwd.txt -p internal.txt -o master.p12" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/cloning_a_subsystem
Red Hat OpenShift Cluster Manager
Red Hat OpenShift Cluster Manager Red Hat OpenShift Service on AWS 4 Configuring Red Hat OpenShift Service on AWS clusters using OpenShift Cluster Manager Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/red_hat_openshift_cluster_manager/index
Chapter 3. Conditional policies in Red Hat Developer Hub
Chapter 3. Conditional policies in Red Hat Developer Hub The permission framework in Red Hat Developer Hub provides conditions, supported by the RBAC backend plugin ( backstage-plugin-rbac-backend ). The conditions work as content filters for the Developer Hub resources that are provided by the RBAC backend plugin. The RBAC backend API stores conditions assigned to roles in the database. When you request to access the frontend resources, the RBAC backend API searches for the corresponding conditions and delegates them to the appropriate plugin using its plugin ID. If you are assigned to multiple roles with different conditions, then the RBAC backend merges the conditions using the anyOf criteria. Conditional criteria A condition in Developer Hub is a simple condition with a rule and parameters. However, a condition can also contain a parameter or an array of parameters combined by conditional criteria. The supported conditional criteria includes: allOf : Ensures that all conditions within the array must be true for the combined condition to be satisfied. anyOf : Ensures that at least one of the conditions within the array must be true for the combined condition to be satisfied. not : Ensures that the condition within it must not be true for the combined condition to be satisfied. Conditional object The plugin specifies the parameters supported for conditions. You can access the conditional object schema from the RBAC API endpoint to understand how to construct a conditional JSON object, which is then used by the RBAC backend plugin API. A conditional object contains the following parameters: Table 3.1. Conditional object parameters Parameter Type Description result String Always has the value CONDITIONAL roleEntityRef String String entity reference to the RBAC role, such as role:default/dev pluginId String Corresponding plugin ID, such as catalog permissionMapping String array Array permission actions, such as ['read', 'update', 'delete'] resourceType String Resource type provided by the plugin, such as catalog-entity conditions JSON Condition JSON with parameters or array parameters joined by criteria 3.1. Conditional policies definition You can access API endpoints for conditional policies in Red Hat Developer Hub. For example, to retrieve the available conditional rules, which can help you define these policies, you can access the GET [api/plugins/condition-rules] endpoint. The api/plugins/condition-rules returns the condition parameters schemas, for example: [ { "pluginId": "catalog", "rules": [ { "name": "HAS_ANNOTATION", "description": "Allow entities with the specified annotation", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "annotation": { "type": "string", "description": "Name of the annotation to match on" }, "value": { "type": "string", "description": "Value of the annotation to match on" } }, "required": [ "annotation" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_LABEL", "description": "Allow entities with the specified label", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "label": { "type": "string", "description": "Name of the label to match on" } }, "required": [ "label" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_METADATA", "description": "Allow entities with the specified metadata subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities metadata to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_SPEC", "description": "Allow entities with the specified spec subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities spec to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_KIND", "description": "Allow entities matching a specified kind", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "kinds": { "type": "array", "items": { "type": "string" }, "description": "List of kinds to match at least one of" } }, "required": [ "kinds" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_OWNER", "description": "Allow entities owned by a specified claim", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "claims": { "type": "array", "items": { "type": "string" }, "description": "List of claims to match at least one on within ownedBy" } }, "required": [ "claims" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } } ] } ... <another plugin condition parameter schemas> ] The RBAC backend API constructs a condition JSON object based on the condition schema. 3.1.1. Examples of conditional policies In Red Hat Developer Hub, you can define conditional policies with or without criteria. You can use the following examples to define the conditions based on your use case: A condition without criteria Consider a condition without criteria displaying catalogs only if user is a member of the owner group. To add this condition, you can use the catalog plugin schema IS_ENTITY_OWNER as follows: Example condition without criteria { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } In the example, the only conditional parameter used is claims , which contains a list of user or group entity references. You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } } A condition with criteria Consider a condition with criteria, which displays catalogs only if user is a member of owner group OR displays list of all catalog user groups. To add the criteria, you can add another rule as IS_ENTITY_KIND in the condition as follows: Example condition with criteria { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } Note Running conditions in parallel during creation is not supported. Therefore, consider defining nested conditional policies based on the available criteria. Example of nested conditions { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ], "not": { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Api"] } } } You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } } The following examples can be used with Developer Hub plugins. These examples can help you determine how to define conditional policies: Conditional policy defined for Keycloak plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["update", "delete"], "conditions": { "not": { "rule": "HAS_ANNOTATION", "resourceType": "catalog-entity", "params": { "annotation": "keycloak.org/realm", "value": "<YOUR_REALM>" } } } } The example of Keycloak plugin prevents users in the role:default/developer from updating or deleting users that are ingested into the catalog from the Keycloak plugin. Note In the example, the annotation keycloak.org/realm requires the value of <YOUR_REALM> . Conditional policy defined for Quay plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "scaffolder", "resourceType": "scaffolder-action", "permissionMapping": ["use"], "conditions": { "not": { "rule": "HAS_ACTION_ID", "resourceType": "scaffolder-action", "params": { "actionId": "quay:create-repository" } } } } The example of Quay plugin prevents the role role:default/developer from using the Quay scaffolder action. Note that permissionMapping contains use , signifying that scaffolder-action resource type permission does not have a permission policy. For more information about permissions in Red Hat Developer Hub, see Chapter 2, Permission policies in Red Hat Developer Hub .
[ "[ { \"pluginId\": \"catalog\", \"rules\": [ { \"name\": \"HAS_ANNOTATION\", \"description\": \"Allow entities with the specified annotation\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"annotation\": { \"type\": \"string\", \"description\": \"Name of the annotation to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the annotation to match on\" } }, \"required\": [ \"annotation\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_LABEL\", \"description\": \"Allow entities with the specified label\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"label\": { \"type\": \"string\", \"description\": \"Name of the label to match on\" } }, \"required\": [ \"label\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_METADATA\", \"description\": \"Allow entities with the specified metadata subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities metadata to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_SPEC\", \"description\": \"Allow entities with the specified spec subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities spec to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_KIND\", \"description\": \"Allow entities matching a specified kind\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"kinds\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of kinds to match at least one of\" } }, \"required\": [ \"kinds\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_OWNER\", \"description\": \"Allow entities owned by a specified claim\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"claims\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of claims to match at least one on within ownedBy\" } }, \"required\": [ \"claims\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } } ] } ... <another plugin condition parameter schemas> ]", "{ \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } } }", "{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] }", "{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ], \"not\": { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Api\"] } } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"update\", \"delete\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ANNOTATION\", \"resourceType\": \"catalog-entity\", \"params\": { \"annotation\": \"keycloak.org/realm\", \"value\": \"<YOUR_REALM>\" } } } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"scaffolder\", \"resourceType\": \"scaffolder-action\", \"permissionMapping\": [\"use\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ACTION_ID\", \"resourceType\": \"scaffolder-action\", \"params\": { \"actionId\": \"quay:create-repository\" } } } }" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/authorization/con-rbac-conditional-policies-rhdh_title-authorization
2.2.6.2. Anonymous Access
2.2.6.2. Anonymous Access The presence of the /var/ftp/ directory activates the anonymous account. The easiest way to create this directory is to install the vsftpd package. This package establishes a directory tree for anonymous users and configures the permissions on directories to read-only for anonymous users. By default the anonymous user cannot write to any directories. Warning If enabling anonymous access to an FTP server, be aware of where sensitive data is stored. Procedure 2.1. Anonymous Upload To allow anonymous users to upload files, it is recommended to create a write-only directory within the /var/ftp/pub/ directory. Run the following command as root to create such directory named /upload/ : , change the permissions so that anonymous users cannot view the contents of the directory: A long format listing of the directory should look like this: Note Administrators who allow anonymous users to read and write in directories often find that their servers become a repository of stolen software. Under vsftpd , add the following line to the /etc/vsftpd/vsftpd.conf file: In Red Hat Enterprise Linux, the SELinux is running in Enforcing mode by default. Therefore, the allow_ftpd_anon_write Boolean must be enabled in order to allow vsftpd to upload files: Label the /upload/ directory and its files with the public_content_rw_t SELinux context: Note The semanage utility is provided by the policycoreutils-python package, which is not installed by default. To install it, use the following command as root: Use the restorecon utility to change the type of /upload/ and its files: The directory is now properly labeled with public_content_rw_t so that SELinux in Enforcing mode allows anonymous users to upload files to it: For further information about using SELinux, see the Security-Enhanced Linux User Guide and Managing Confined Services guides.
[ "~]# mkdir /var/ftp/pub/upload", "~]# chmod 730 /var/ftp/pub/upload", "~]# ls -ld /var/ftp/pub/upload drwx-wx---. 2 root ftp 4096 Nov 14 22:57 /var/ftp/pub/upload", "anon_upload_enable=YES", "~]# setsebool -P allow_ftpd_anon_write=1", "~]# semanage fcontext -a -t public_content_rw_t '/var/ftp/pub/upload(/.*)'", "~]# yum install policycoreutils-python", "~]# restorecon -R -v /var/ftp/pub/upload", "~]USD ls -dZ /var/ftp/pub/upload drwx-wx---. root root unconfined_u:object_r:public_content_t:s0 /var/ftp/pub/upload/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_ftp-anonymous_access
1.5. Red Hat GFS
1.5. Red Hat GFS Red Hat GFS is a cluster file system that allows a cluster of nodes to simultaneously access a block device that is shared among the nodes. GFS is a native file system that interfaces directly with the VFS layer of the Linux kernel file-system interface. GFS employs distributed metadata and multiple journals for optimal operation in a cluster. To maintain file system integrity, GFS uses a lock manager to coordinate I/O. When one node changes data on a GFS file system, that change is immediately visible to the other cluster nodes using that file system. Using Red Hat GFS, you can achieve maximum application uptime through the following benefits: Simplifying your data infrastructure Install and patch applications once for the entire cluster. Eliminates the need for redundant copies of application data (duplication). Enables concurrent read/write access to data by many clients. Simplifies backup and disaster recovery (only one file system to back up or recover). Maximize the use of storage resources; minimize storage administration costs. Manage storage as a whole instead of by partition. Decrease overall storage needs by eliminating the need for data replications. Scale the cluster seamlessly by adding servers or storage on the fly. No more partitioning storage through complicated techniques. Add servers to the cluster on the fly by mounting them to the common file system. Nodes that run Red Hat GFS are configured and managed with Red Hat Cluster Suite configuration and management tools. Volume management is managed through CLVM (Cluster Logical Volume Manager). Red Hat GFS provides data sharing among GFS nodes in a Red Hat cluster. GFS provides a single, consistent view of the file-system name space across the GFS nodes in a Red Hat cluster. GFS allows applications to install and run without much knowledge of the underlying storage infrastructure. Also, GFS provides features that are typically required in enterprise environments, such as quotas, multiple journals, and multipath support. GFS provides a versatile method of networking storage according to the performance, scalability, and economic needs of your storage environment. This chapter provides some very basic, abbreviated information as background to help you understand GFS. You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a LAN with servers that use GNBD (Global Network Block Device) or to iSCSI (Internet Small Computer System Interface) devices. (For more information about GNBD, refer to Section 1.7, "Global Network Block Device" .) The following sections provide examples of how GFS can be deployed to suit your needs for performance, scalability, and economy: Section 1.5.1, "Superior Performance and Scalability" Section 1.5.2, "Performance, Scalability, Moderate Price" Section 1.5.3, "Economy and Performance" Note The GFS deployment examples reflect basic configurations; your needs might require a combination of configurations shown in the examples. 1.5.1. Superior Performance and Scalability You can obtain the highest shared-file performance when applications access storage directly. The GFS SAN configuration in Figure 1.12, "GFS with a SAN" provides superior file performance for shared files and file systems. Linux applications run directly on cluster nodes using GFS. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports over 300 GFS nodes. Figure 1.12. GFS with a SAN
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-rhgfs-overview-cso
Chapter 14. Using Generic JMS
Chapter 14. Using Generic JMS Abstract Apache CXF provides a generic implementation of a JMS transport. The generic JMS transport is not restricted to using SOAP messages and allows for connecting to any application that uses JMS. NOTE : Support for the JMS 1.0.2 APIs has been removed in CXF 3.0. If you are using RedHat JBoss Fuse 6.2 or higher (includes CXF 3.0), your JMS provider must support the JMS 1.1 APIs. 14.1. Approaches to Configuring JMS The Apache CXF generic JMS transport can connect to any JMS provider and work with applications that exchange JMS messages with bodies of either TextMessage or ByteMessage . There are two ways to enable and configure the JMS transport: Section 14.2, "Using the JMS configuration bean" Section 14.5, "Using WSDL to configure JMS" 14.2. Using the JMS configuration bean Overview To simplify JMS configuration and make it more powerful, Apache CXF uses a single JMS configuration bean to configure JMS endpoints. The bean is implemented by the org.apache.cxf.transport.jms.JMSConfiguration class. It can be used to either configure endpoint's directly or to configure the JMS conduits and destinations. Configuration namespace The JMS configuration bean uses the Spring p-namespace to make the configuration as simple as possible. To use this namespace you need to declare it in the configuration's root element as shown in Example 14.1, "Declaring the Spring p-namespace" . Example 14.1. Declaring the Spring p-namespace Specifying the configuration You specify the JMS configuration by defining a bean of class org.apache.cxf.transport.jms.JMSConfiguration . The properties of the bean provide the configuration settings for the transport. Important In CXF 3.0, the JMS transport no longer has a dependency on Spring JMS, so some Spring JMS-related options have been removed. Table 14.1, "General JMS Configuration Properties" lists properties that are common to both providers and consumers. Table 14.1. General JMS Configuration Properties Property Default Description connectionFactory [Required] Specifies a reference to a bean that defines a JMS ConnectionFactory. wrapInSingleConnectionFactory true [pre v3.0] Removed in CXF 3.0 pre CXF 3.0 Specifies whether to wrap the ConnectionFactory with a Spring SingleConnectionFactory . Enable this property when using a ConnectionFactory that does not pool connections, as it will improve the performance of the JMS transport. This is so because the JMS transport creates a new connection for each message, and the SingleConnectionFactory is needed to cache the connection, so it can be reused. reconnectOnException false Deprecated in CXF 3.0 CXF always reconnects when an exception occurs. pre CXF 3.0 Specifies whether to create a new connection when an exception occurs. When wrapping the ConnectionFactory with a Spring SingleConnectionFactory : true - on an exception, create a new connection Do not enable this option when using a PooledConnectionFactory, as this option only returns the pooled connection, but does not reconnect. false - on an exception, do not try to reconnect targetDestination Specifies the JNDI name or provider-specific name of a destination. replyDestination Specifies the JMS name of the JMS destination where replies are sent. This property allows the use of a user-defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . destinationResolver DynamicDestinationResolver Specifies a reference to a Spring DestinationResolver . This property allows you to define how destination names are resolved to JMS destinations. Valid values are: DynamicDestinationResolver - resolve destination names using the features of the JMS provider. JndiDestinationResolver - resolve destination names using JNDI. transactionManager Specifies a reference to a Spring transaction manager. This enables the service to participate in JTA transactions. taskExecutor SimpleAsyncTaskExecutor Removed in CXF 3.0 pre CXF 3.0 Specifies a reference to a Spring TaskExecutor. This is used in listeners to decide how to handle incoming messages. useJms11 false Removed in CXF 3.0 CXF 3.0 supports JMS 1.1 features only. pre CXF 3.0 Specifies whether JMS 1.1 features are used. Valid values are: true - JMS 1.1 features false - JMS 1.0.2 features messageIdEnabled true Removed in CXF 3.0 pre CXF 3.0 Specifies whether the JMS transport wants the JMS broker to provide message IDs. Valid values are: true - broker needs to provide message IDs false - broker need not provide message IDs In this case, the endpoint calls its message producer's setDisableMessageID() method with a value of true . The broker is then given a hint that it need not generate message IDs or add them to the endpoint's messages. The broker either accepts the hint or ignores it. messageTimestampEnabled true Removed in CXF 3.0 pre CXF 3.0 Specifies whether the JMS transport wants the JMS broker to provide message time stamps. Valid values are: true - broker needs to provide message timestamps false - broker need not provide message timestamps In this case, the endpoint calls its message producer's setDisableMessageTimestamp() method with a value of true . The broker is then given a hint that it need not generate time stamps or add them to the endpoint's messages. The broker either accepts the hint or ignores it. cacheLevel -1 (feature disabled) Removed in CXF 3.0 pre CXF 3.0 Specifies the level of caching that the JMS listener container may apply. Valid values are: 0 - CACHE_NONE 1 - CACHE_CONNECTION 2 - CACHE_SESSION 3 - CACHE_CONSUMER 4 - CACHE_AUTO For details, see Class DefaultMessageListenerContainer pubSubNoLocal false Specifies whether to receive your own messages when using topics. true - do not receive your own messages false - receive your own messages receiveTimeout 60000 Specifies the time, in milliseconds, to wait for response messages. explicitQosEnabled false Specifies whether the QoS settings (such as priority, persistence, time to live) are explicitly set for each message ( true ) or use the default values ( false ). deliveryMode 2 Specifies whether a message is persistent. Valid values are: 1 (NON_PERSISTENT)-messages are kept memory only 2 (PERSISTENT)-messages are persisted to disk priority 4 Specifies message priority. JMS priority values range from 0 (lowest) to 9 (highest). See your JMS provider's documentation for details. timeToLive 0 (indefinitely) Specifies the time, in milliseconds, before a message that has been sent is discarded. sessionTransacted false Specifies whether JMS transactions are used. concurrentConsumers 1 Removed in CXF 3.0 pre CXF 3.0 Specifies the minimum number of concurrent consumers for the listener. maxConcurrentConsumers 1 Removed in CXF 3.0 pre CXF 3.0 Specifies the maximum number of concurrent consumers for the listener. messageSelector Specifies the string value of the selector used to filter incoming messages. This property enables multiple connections to share a queue. For more information on the syntax used to specify message selectors, see the JMS 1.1 specification . subscriptionDurable false Specifies whether the server uses durable subscriptions. durableSubscriptionName Specifies the name (string) used to register the durable subscription. messageType text Specifies how the message data will be packaged as a JMS message. Valid values are: text - specifies that the data will be packaged as a TextMessage byte - specifies that the data will be packaged as an array of bytes ( byte[] ) binary - specifies that the data will be packaged as an ByteMessage pubSubDomain false Specifies whether the target destination is a topic or a queue. Valid values are: true - topic false - queue jmsProviderTibcoEms false Specifies whether the JMS provider is Tibco EMS. When set to true , the principal in the security context is populated from the JMS_TIBCO_SENDER header. useMessageIDAsCorrelationID false Removed in CXF 3.0 Specifies whether JMS will use the message ID to correlate messages. When set to true , the client sets a generated correlation ID. maxSuspendedContinuations -1 (feature disabled) CXF 3.0 Specifies the maximum number of suspended continuations the JMS destination may have. When the current number exceeds the specified maximum, the JMSListenerContainer is stopped. reconnectPercentOfMax 70 CXF 3.0 Specifies when to restart the JMSListenerContainer stopped for exceeding maxSuspendedContinuations . The listener container is restarted when its current number of suspended continuations falls below the value of (maxSuspendedContinuations * reconnectPercentOfMax/100) . As shown in Example 14.2, "JMS configuration bean" , the bean's properties are specified as attributes to the bean element. They are all declared in the Spring p namespace. Example 14.2. JMS configuration bean Applying the configuration to an endpoint The JMSConfiguration bean can be applied directly to both server and client endpoints using the Apache CXF features mechanism. To do so: Set the endpoint's address attribute to jms:// . Add a jaxws:feature element to the endpoint's configuration. Add a bean of type org.apache.cxf.transport.jms.JMSConfigFeature to the feature. Set the bean element's p:jmsConfig-ref attribute to the ID of the JMSConfiguration bean. Example 14.3, "Adding JMS configuration to a JAX-WS client" shows a JAX-WS client that uses the JMS configuration from Example 14.2, "JMS configuration bean" . Example 14.3. Adding JMS configuration to a JAX-WS client Applying the configuration to the transport The JMSConfiguration bean can be applied to JMS conduits and JMS destinations using the jms:jmsConfig-ref element. The jms:jmsConfig-ref element's value is the ID of the JMSConfiguration bean. Example 14.4, "Adding JMS configuration to a JMS conduit" shows a JMS conduit that uses the JMS configuration from Example 14.2, "JMS configuration bean" . Example 14.4. Adding JMS configuration to a JMS conduit 14.3. Optimizing Client-Side JMS Performance Overview Two major settings affect the JMS performance of clients: pooling and synchronous receives. Pooling On the client side, CXF creates a new JMS session and JMS producer for each message. This is so because neither session nor producer objects are thread safe. Creating a producer is especially time intensive because it requires communicating with the server. Pooling connection factories improves performance by caching the connection, session, and producer. For ActiveMQ, configuring pooling is simple; for example: For more information on pooling, see "Appendix A Optimizing Performance of JMS Single- and Multiple-Resource Transactions" in the Red Hat JBoss Fuse Transaction Guide Avoiding synchronous receives For request/reply exchanges, the JMS transport sends a request and then waits for a reply. Whenever possible, request/reply messaging is implemented asynchronously using a JMS MessageListener . However, CXF must use a synchronous Consumer.receive() method when it needs to share queues between endpoints. This scenario requires the MessageListener to use a message selector to filter the messages. The message selector must be known in advance, so the MessageListener is opened only once. Two cases in which the message selector cannot be known in advance should be avoided: When JMSMessageID is used as the JMSCorrelationID If the JMS properties useConduitIdSelector and conduitSelectorPrefix are not set on the JMS transport, the client does not set a JMSCorrelationId . This causes the server to use the JMSMessageId of the request message as the JMSCorrelationId . As JMSMessageID cannot be known in advance, the client has to use a synchronous Consumer.receive() method. Note that you must use the Consumer.receive() method with IBM JMS endpoints (their default). The user sets the JMStype in the request message and then sets a custom JMSCorrelationID . Again, as the custom JMSCorrelationID cannot be known in advance, the client has to use a synchronous Consumer.receive() method. So the general rule is to avoid using settings that require using a synchronous receive. 14.4. Configuring JMS Transactions Overview CXF 3.0 supports both local JMS transactions and JTA transactions on CXF endpoints, when using one-way messaging. Local transactions Transactions using local resources roll back the JMS message only when an exception occurs. They do not directly coordinate other resources, such as database transactions. To set up a local transaction, configure the endpoint as you normally would, and set the property sessionTrasnsacted to true . Note For more information on transactions and pooling, see the Red Hat JBoss Fuse Transaction Guide . JTA transactions Using JTA transactions, you can coordinate any number of XA resources. If a CXF endpoint is configured for JTA transactions, it starts a transaction before calling the service implementation. The transaction will be committed if no exception occurs. Otherwise, it will be rolled back. In JTA transactions, a JMS message is consumed and the data written to a database. When an exception occurs, both resources are rolled back, so either the message is consumed and the data is written to the database, or the message is rolled back and the data is not written to the database. Configuring JTA transactions requires two steps: Defining a transaction manager bean method Define a transaction manager Set the name of the transaction manager in the JMS URI This example finds a bean with the ID TransactionManager . OSGi reference method Look up the transaction manager as an OSGi service using Blueprint Set the name of the transaction manager in the JMS URI This example looks up the transaction manager in JNDI. Configuring a JCA pooled connection factory Using Spring to define the JCA pooled connection factory: In this example, the first bean defines an ActiveMQ XA connection factory, which is given to a JcaPooledConnectionFactory . The JcaPooledConnectionFactory is then provided as the default bean with id ConnectionFactory . Note that the JcaPooledConnectionFactory looks like a normal ConnectionFactory. But when a new connection and session are opened, it checks for an XA transaction and, if found, automatically registers the JMS session as an XA resource. This allows the JMS session to participate in the JMS transaction. Important Directly setting an XA ConnectionFactory on the JMS transport will not work! 14.5. Using WSDL to configure JMS 14.5.1. JMS WSDL Extension Namespance The WSDL extensions for defining a JMS endpoint are defined in the namespace http://cxf.apache.org/transports/jms . In order to use the JMS extensions you will need to add the line shown in Example 14.5, "JMS WSDL extension namespace" to the definitions element of your contract. Example 14.5. JMS WSDL extension namespace 14.5.2. Basic JMS configuration Overview The JMS address information is provided using the jms:address element and its child, the jms:JMSNamingProperties element. The jms:address element's attributes specify the information needed to identify the JMS broker and the destination. The jms:JMSNamingProperties element specifies the Java properties used to connect to the JNDI service. Important Information specified using the JMS feature will override the information in the endpoint's WSDL file. Specifying the JMS address The basic configuration for a JMS endpoint is done by using a jms:address element as the child of your service's port element. The jms:address element used in WSDL is identical to the one used in the configuration file. Its attributes are listed in Table 14.2, "JMS endpoint attributes" . Table 14.2. JMS endpoint attributes Attribute Description destinationStyle Specifies if the JMS destination is a JMS queue or a JMS topic. jndiConnectionFactoryName Specifies the JNDI name bound to the JMS connection factory to use when connecting to the JMS destination. jmsDestinationName Specifies the JMS name of the JMS destination to which requests are sent. jmsReplyDestinationName Specifies the JMS name of the JMS destinations where replies are sent. This attribute allows you to use a user defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . jndiDestinationName Specifies the JNDI name bound to the JMS destination to which requests are sent. jndiReplyDestinationName Specifies the JNDI name bound to the JMS destinations where replies are sent. This attribute allows you to use a user defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . connectionUserName Specifies the user name to use when connecting to a JMS broker. connectionPassword Specifies the password to use when connecting to a JMS broker. The jms:address WSDL element uses a jms:JMSNamingProperties child element to specify additional information needed to connect to a JNDI provider. Specifying JNDI properties To increase interoperability with JMS and JNDI providers, the jms:address element has a child element, jms:JMSNamingProperties , that allows you to specify the values used to populate the properties used when connecting to the JNDI provider. The jms:JMSNamingProperties element has two attributes: name and value . name specifies the name of the property to set. value attribute specifies the value for the specified property. jms:JMSNamingProperties element can also be used for specification of provider specific properties. The following is a list of common JNDI properties that can be set: java.naming.factory.initial java.naming.provider.url java.naming.factory.object java.naming.factory.state java.naming.factory.url.pkgs java.naming.dns.url java.naming.authoritative java.naming.batchsize java.naming.referral java.naming.security.protocol java.naming.security.authentication java.naming.security.principal java.naming.security.credentials java.naming.language java.naming.applet For more details on what information to use in these attributes, check your JNDI provider's documentation and consult the Java API reference material. Example Example 14.6, "JMS WSDL port specification" shows an example of a JMS WSDL port specification. Example 14.6. JMS WSDL port specification 14.5.3. JMS client configuration Overview JMS consumer endpoints specify the type of messages they use. JMS consumer endpoint can use either a JMS ByteMessage or a JMS TextMessage . When using an ByteMessage the consumer endpoint uses a byte[] as the method for storing data into and retrieving data from the JMS message body. When messages are sent, the message data, including any formating information, is packaged into a byte[] and placed into the message body before it is placed on the wire. When messages are received, the consumer endpoint will attempt to unmarshall the data stored in the message body as if it were packed in a byte[] . When using a TextMessage , the consumer endpoint uses a string as the method for storing and retrieving data from the message body. When messages are sent, the message information, including any format-specific information, is converted into a string and placed into the JMS message body. When messages are received the consumer endpoint will attempt to unmarshall the data stored in the JMS message body as if it were packed into a string. When native JMS applications interact with Apache CXF consumers, the JMS application is responsible for interpreting the message and the formatting information. For example, if the Apache CXF contract specifies that the binding used for a JMS endpoint is SOAP, and the messages are packaged as TextMessage , the receiving JMS application will get a text message containing all of the SOAP envelope information. Specifying the message type The type of messages accepted by a JMS consumer endpoint is configured using the optional jms:client element. The jms:client element is a child of the WSDL port element and has one attribute: Table 14.3. JMS Client WSDL Extensions messageType Specifies how the message data will be packaged as a JMS message. text specifies that the data will be packaged as a TextMessage . binary specifies that the data will be packaged as an ByteMessage . Example Example 14.7, "WSDL for a JMS consumer endpoint" shows the WSDL for configuring a JMS consumer endpoint. Example 14.7. WSDL for a JMS consumer endpoint 14.5.4. JMS provider configuration Overview JMS provider endpoints have a number of behaviors that are configurable. These include: how messages are correlated the use of durable subscriptions if the service uses local JMS transactions the message selectors used by the endpoint Specifying the configuration Provider endpoint behaviors are configured using the optional jms:server element. The jms:server element is a child of the WSDL wsdl:port element and has the following attributes: Table 14.4. JMS provider endpoint WSDL extensions Attribute Description useMessageIDAsCorrealationID Specifies whether JMS will use the message ID to correlate messages. The default is false . durableSubscriberName Specifies the name used to register a durable subscription. messageSelector Specifies the string value of a message selector to use. For more information on the syntax used to specify message selectors, see the JMS 1.1 specification. transactional Specifies whether the local JMS broker will create transactions around message processing. The default is false . [a] [a] Currently, setting the transactional attribute to true is not supported by the runtime. Example Example 14.8, "WSDL for a JMS provider endpoint" shows the WSDL for configuring a JMS provider endpoint. Example 14.8. WSDL for a JMS provider endpoint 14.6. Using a Named Reply Destination Overview By default, Apache CXF endpoints using JMS create a temporary queue for sending replies back and forth. If you prefer to use named queues, you can configure the queue used to send replies as part of an endpoint's JMS configuration. Setting the reply destination name You specify the reply destination using either the jmsReplyDestinationName attribute or the jndiReplyDestinationName attribute in the endpoint's JMS configuration. A client endpoint will listen for replies on the specified destination and it will specify the value of the attribute in the ReplyTo field of all outgoing requests. A service endpoint will use the value of the jndiReplyDestinationName attribute as the location for placing replies if there is no destination specified in the request's ReplyTo field. Example Example 14.9, "JMS Consumer Specification Using a Named Reply Queue" shows the configuration for a JMS client endpoint. Example 14.9. JMS Consumer Specification Using a Named Reply Queue
[ "<beans xmlns:p=\"http://www.springframework.org/schema/p\" ... > </beans>", "<bean id=\"jmsConfig\" class=\"org.apache.cxf.transport.jms.JMSConfiguration\" p:connectionFactory=\"jmsConnectionFactory\" p:targetDestination=\"dynamicQueues/greeter.request.queue\" p:pubSubDomain=\"false\" />", "<jaxws:client id=\"CustomerService\" xmlns:customer=\"http://customerservice.example.com/\" serviceName=\"customer:CustomerServiceService\" endpointName=\"customer:CustomerServiceEndpoint\" address=\"jms://\" serviceClass=\"com.example.customerservice.CustomerService\"> <jaxws:features> <bean xmlns=\"http://www.springframework.org/schema/beans\" class=\"org.apache.cxf.transport.jms.JMSConfigFeature\" p:jmsConfig-ref=\"jmsConfig\"/> </jaxws:features> </jaxws:client>", "<jms:conduit name=\"{http://cxf.apache.org/jms_conf_test}HelloWorldQueueBinMsgPort.jms-conduit\"> <jms:jmsConfig-ref>jmsConf</jms:jmsConfig-ref> </jms:conduit>", "import org.apache.activemq.ActiveMQConnectionFactory; import org.apache.activemq.pool.PooledConnectionFactory; ConnectionFactory cf = new ActiveMQConnectionFactory(\"tcp://localhost:61616\"); PooledConnectionFactory pcf = new PooledConnectionFactory(); //Set expiry timeout because the default (0) prevents reconnection on failure pcf.setExpiryTimeout(5000); pcf.setConnectionFactory(cf); JMSConfiguration jmsConfig = new JMSConfiguration(); jmsConfig.setConnectionFactory(pdf);", "<bean id=\"transactionManager\" class=\"org.apache.geronimo.transaction.manager.GeronimoTransactionManager\"/>", "jms:queue:myqueue?jndiTransactionManager=TransactionManager", "<reference id=\"TransactionManager\" interface=\"javax.transaction.TransactionManager\"/>", "jms:jndi:myqueue?jndiTransactionManager=java:comp/env/TransactionManager", "<bean id=\"xacf\" class=\"org.apache.activemq.ActiveMQXAConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp://localhost:61616\" /> </bean> <bean id=\"ConnectionFactory\" class=\"org.apache.activemq.jms.pool.JcaPooledConnectionFactory\"> <property name=\"transactionManager\" ref=\"transactionManager\" /> <property name=\"connectionFactory\" ref=\"xacf\" /> </bean>", "xmlns:jms=\"http://cxf.apache.org/transports/jms\"", "<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </port> </service>", "<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> <jms:client messageType=\"binary\" /> </port> </service>", "<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> <jms:server messageSelector=\"cxf_message_selector\" useMessageIDAsCorrelationID=\"true\" transactional=\"true\" durableSubscriberName=\"cxf_subscriber\" /> </port> </service>", "<jms:conduit name=\"{http://cxf.apache.org/jms_endpt}HelloWorldJMSPort.jms-conduit\"> <jms:address destinationStyle=\"queue\" jndiConnectionFactoryName=\"myConnectionFactory\" jndiDestinationName=\"myDestination\" jndiReplyDestinationName=\"myReplyDestination\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.apache.cxf.transport.jms.MyInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </jms:conduit>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/fusecxfjms
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/red_hat_openshift_data_foundation_architecture/making-open-source-more-inclusive
Chapter 12. Managing Disk Storage
Chapter 12. Managing Disk Storage Introduction to different methods........ 12.1. Standard Partitions using parted Many users need to view the existing partition table, change the size of the partitions, remove partitions, or add partitions from free space or additional hard drives. The utility parted allows users to perform these tasks. This chapter discusses how to use parted to perform file system tasks. If you want to view the system's disk space usage or monitor the disk space usage, refer to Section 39.3, "File Systems" . You must have the parted package installed to use the parted utility. To start parted , at a shell prompt as root, type the command parted /dev/sda , where /dev/sda is the device name for the drive you want to configure. The (parted) prompt is displayed. Type help to view a list of available commands. If you want to create, remove, or resize a partition, the device cannot be in use (partitions cannot be mounted, and swap space cannot be enabled). The partition table should not be modified while in use because the kernel may not properly recognize the changes. Data could be overwritten by writing to the wrong partition because the partition table and partitions mounted do not match. The easiest way to achieve this it to boot your system in rescue mode. Refer to Chapter 5, Basic System Recovery for instructions on booting into rescue mode. When prompted to mount the file system, select Skip . Alternately, if the drive does not contain any partitions in use (system processes that use or lock the file system from being unmounted), you can unmount them with the umount command and turn off all the swap space on the hard drive with the swapoff command. Table 12.1, " parted commands" contains a list of commonly used parted commands. The sections that follow explain some of them in more detail. Table 12.1. parted commands Command Description check minor-num Perform a simple check of the file system cp from to Copy file system from one partition to another; from and to are the minor numbers of the partitions help Display list of available commands mklabel label Create a disk label for the partition table mkfs minor-num file-system-type Create a file system of type file-system-type mkpart part-type fs-type start-mb end-mb Make a partition without creating a new file system mkpartfs part-type fs-type start-mb end-mb Make a partition and create the specified file system move minor-num start-mb end-mb Move the partition name minor-num name Name the partition for Mac and PC98 disklabels only print Display the partition table quit Quit parted rescue start-mb end-mb Rescue a lost partition from start-mb to end-mb resize minor-num start-mb end-mb Resize the partition from start-mb to end-mb rm minor-num Remove the partition select device Select a different device to configure set minor-num flag state Set the flag on a partition; state is either on or off 12.1.1. Viewing the Partition Table After starting parted , type the following command to view the partition table: A table similar to the following appears: The first line displays the size of the disk, the second line displays the disk label type, and the remaining output shows the partition table. In the partition table, the Minor number is the partition number. For example, the partition with minor number 1 corresponds to /dev/sda1 . The Start and End values are in megabytes. The Type is one of primary, extended, or logical. The Filesystem is the file system type, which can be one of ext2, ext3, fat16, fat32, hfs, jfs, linux-swap, ntfs, reiserfs, hp-ufs, sun-ufs, or xfs. The Flags column lists the flags set for the partition. Available flags are boot, root, swap, hidden, raid, lvm, or lba. In this example, minor number 1 refers to the /boot/ file system, minor number 2 refers to the root file system ( / ), minor number 3 refers to the swap, and minor number 5 refers to the /home/ file system. Note To select a different device without having to restart parted , use the select command followed by the device name such as /dev/sda . Then, you can view its partition table or configure it. 12.1.2. Creating a Partition Warning Do not attempt to create a partition on a device that is in use. Before creating a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). Start parted , where /dev/ sda is the device on which to create the partition: View the current partition table to determine if there is enough free space: If there is not enough free space, you can resize an existing partition. Refer to Section 12.1.4, "Resizing a Partition" for details. 12.1.2.1. Making the Partition From the partition table, determine the start and end points of the new partition and what partition type it should be. You can only have four primary partitions (with no extended partition) on a device. If you need more than four partitions, you can have three primary partitions, one extended partition, and multiple logical partitions within the extended. For an overview of disk partitions, refer to the appendix An Introduction to Disk Partitions in the Installation Guide . For example, to create a primary partition with an ext3 file system from 1024 megabytes until 2048 megabytes on a hard drive type the following command: Note If you use the mkpartfs command instead, the file system is created after the partition is created. However, parted does not support creating an ext3 file system. Thus, if you wish to create an ext3 file system, use mkpart and create the file system with the mkfs command as described later. mkpartfs works for file system type linux-swap. The changes start taking place as soon as you press Enter , so review the command before executing to it. After creating the partition, use the print command to confirm that it is in the partition table with the correct partition type, file system type, and size. Also remember the minor number of the new partition so that you can label it. You should also view the output of to make sure the kernel recognizes the new partition. 12.1.2.2. Formating the Partition The partition still does not have a file system. Create the file system: Warning Formatting the partition permanently destroys any data that currently exists on the partition. 12.1.2.3. Labeling the Partition , give the partition a label. For example, if the new partition is /dev/sda6 and you want to label it /work : By default, the installation program uses the mount point of the partition as the label to make sure the label is unique. You can use any label you want. 12.1.2.4. Creating the Mount Point As root, create the mount point: 12.1.2.5. Add to /etc/fstab As root, edit the /etc/fstab file to include the new partition. The new line should look similar to the following: The first column should contain LABEL= followed by the label you gave the partition. The second column should contain the mount point for the new partition, and the column should be the file system type (for example, ext3 or swap). If you need more information about the format, read the man page with the command man fstab . If the fourth column is the word defaults , the partition is mounted at boot time. To mount the partition without rebooting, as root, type the command: 12.1.3. Removing a Partition Warning Do not attempt to remove a partition on a device that is in use. Before removing a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). Start parted , where /dev/ sda is the device on which to remove the partition: View the current partition table to determine the minor number of the partition to remove: Remove the partition with the command rm . For example, to remove the partition with minor number 3: The changes start taking place as soon as you press Enter , so review the command before committing to it. After removing the partition, use the print command to confirm that it is removed from the partition table. You should also view the output of to make sure the kernel knows the partition is removed. The last step is to remove it from the /etc/fstab file. Find the line that declares the removed partition, and remove it from the file. 12.1.4. Resizing a Partition Warning Do not attempt to resize a partition on a device that is in use. Before resizing a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). Start parted , where /dev/ sda is the device on which to resize the partition: View the current partition table to determine the minor number of the partition to resize as well as the start and end points for the partition: Warning The used space of the partition to resize must not be larger than the new size. To resize the partition, use the resize command followed by the minor number for the partition, the starting place in megabytes, and the end place in megabytes. For example: After resizing the partition, use the print command to confirm that the partition has been resized correctly, is the correct partition type, and is the correct file system type. After rebooting the system into normal mode, use the command df to make sure the partition was mounted and is recognized with the new size.
[ "print", "Disk geometry for /dev/sda: 0.000-8678.789 megabytes Disk label type: msdos Minor Start End Type Filesystem Flags 1 0.031 101.975 primary ext3 boot 2 101.975 5098.754 primary ext3 3 5098.755 6361.677 primary linux-swap 4 6361.677 8675.727 extended 5 6361.708 7357.895 logical ext3 Disk geometry for /dev/hda: 0.000-9765.492 megabytes Disk label type: msdos Minor Start End Type Filesystem Flags 1 0.031 101.975 primary ext3 boot 2 101.975 611.850 primary linux-swap 3 611.851 760.891 primary ext3 4 760.891 9758.232 extended lba 5 760.922 9758.232 logical ext3", "parted /dev/ sda", "print", "mkpart primary ext3 1024 2048", "cat /proc/partitions", "/sbin/mkfs -t ext3 /dev/ sda6", "e2label /dev/sda6 /work", "mkdir /work", "LABEL=/work /work ext3 defaults 1 2", "mount /work", "parted /dev/ sda", "print", "rm 3", "cat /proc/partitions", "parted /dev/ sda", "print", "resize 3 1024 2048" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Managing_Disk_Storage
Appendix A. Reference material
Appendix A. Reference material As a user of JBoss EAP, you can expect seamless compatibility and interoperability between different releases. Connecting from various clients to servers is supported, with only specific cases requiring additional considerations. A.1. Compatibility and interoperability between releases This section describes the compatibility and interoperability of client and server enterprise beans and messaging components between the JBoss EAP 6, JBoss EAP 7, and JBoss EAP 8.0 releases. A.1.1. Enterprise beans remoting over Internet Inter-ORB Protocol The following configurations must run without errors: Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 6 server A.1.2. Enterprise beans remoting using Java Naming and Directory Interface The following configurations must run without errors: Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server JBoss EAP 6 provided support for the Enterprise Beans 3.1 specification and introduced the use of standardized global Java Naming and Directory Interface namespaces, which are still used in JBoss EAP 8.0. The changes in Java Naming and Directory Interface namespace names do not introduce incompatibilities for the following configurations: Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 or a JBoss EAP 7 server Connecting from a JBoss EAP 8.0 or JBoss EAP 7 client to a JBoss EAP 6 server A.1.3. Enterprise beans remoting using @WebService The following configurations must run without errors: Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 6 server A.1.4. Messaging standalone client The following configurations must run without errors: Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server JBoss EAP 8.0 built-in messaging is not able to connect to HornetQ 2.3.x that shipped with JBoss EAP 6 due to protocol compatibility issues. For this reason, the following configuration are not compatible: Connecting from a JBoss EAP 8.0 client to a JBoss EAP 6 server Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 server Note To make this connection possible, you must create a legacy connection factory, accessible through Java Naming and Directory Interface. A.1.5. Messaging MDBs The following configurations must run without errors: Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server JBoss EAP 8.0 built-in messaging is not able to connect to HornetQ 2.3.x that shipped with JBoss EAP 6 due to protocol compatibility issues. For this reason, the following configuration are not compatible: Connecting from a JBoss EAP 8.0 client to a JBoss EAP 6 server Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 server Note To make this connection possible, you must create a legacy connection factory, accessible through Java Naming and Directory Interface. A.1.6. Messaging bridges The following configurations must run without errors: Connecting from a JBoss EAP 6 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 7 client to a JBoss EAP 8.0 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 7 server Connecting from a JBoss EAP 8.0 client to a JBoss EAP 6 server
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/reference-material_default
8.4. Increasing the Size of an XFS File System
8.4. Increasing the Size of an XFS File System An XFS file system may be grown while mounted using the xfs_growfs command: The -D size option grows the file system to the specified size (expressed in file system blocks). Without the -D size option, xfs_growfs will grow the file system to the maximum size supported by the device. Before growing an XFS file system with -D size , ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device. Note While XFS file systems can be grown while mounted, their size cannot be reduced at all. For more information about growing a file system, refer to man xfs_growfs .
[ "xfs_growfs /mount/point -D size" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/xfsgrow
6.3.2.3. /etc/group
6.3.2.3. /etc/group The /etc/group file is world-readable and contains a list of groups, each on a separate line. Each line is a four field, colon delimited list including the following information: Group name - The name of the group. Used by various utility programs as a human-readable identifier for the group. Group password - If set, this allows users that are not part of the group to join the group by using the newgrp command and typing the password stored here. If a lower case x is in this field, then shadow group passwords are being used. Group ID ( GID ) - The numerical equivalent of the group name. It is used by the operating system and applications when determining access privileges. Member list - A comma delimited list of the users belonging to the group. Here is an example line from /etc/group : This line shows that the general group is using shadow passwords, has a GID of 502, and that juan , shelley , and bob are members. For more information on /etc/group , see the group(5) man page.
[ "general:x:502:juan,shelley,bob" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-acctspgrps-group
17.2. RAID Levels and Linear Support
17.2. RAID Levels and Linear Support RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are defined as follows: Level 0 RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost but provides no redundancy. Many RAID level 0 implementations will only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the number of disks or partitions in the array. Level 1 RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks, and provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. [5] The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present. Level 4 Level 4 uses parity [6] concentrated on a single disk drive to protect data. Because the dedicated parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind (such as an array that will have little to no write transactions once the array is populated with data). RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it could be created manually by the user if truly needed. The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one . Performance of a RAID level 4 array will always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra CPU and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks because you are writing not only the data, but also the parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a result, reads generate less traffic to the drives and across the busses of the computer for the same amount of data transfer under normal operating conditions. Level 5 This is the most common type of RAID. By distributing parity across all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have a sufficiently large number of member devices in a software RAID5 array such that the combined aggregate data transfer speed across all devices is high enough, then this bottleneck can start to come into play. As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes. The storage capacity of RAID level 5 is calculated the same way as with level 4. Level 6 This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5. The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you must subtract 2 devices (instead of 1) from the device count for the extra parity storage space. Level 10 This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices. With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead of only equal to the smallest device (like it would be with a 3-device, level 1 array). The number of options available when creating level 10 arrays (as well as the complexity of selecting the right options for a specific use case) make it impractical to create during installation. It is possible to create one manually using the command line mdadm tool. For details on the options and their respective performance trade-offs, refer to man md . Linear RAID Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and, in fact, decreases reliability - if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks. [5] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array, provides data reliability, but in a much less space-efficient manner than parity based RAID levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are consistently taxed with operations other than RAID activities. [6] Parity information is calculated based on the contents of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-raid-levels
Chapter 1. Overview
Chapter 1. Overview 1.1. Introduction SAP NetWeaver or SAP S/4HANA based systems play an important role in many business processes; thus, it is critical to ensure the continuous and reliable availability of those systems to the business. This can be achieved by using HA clustering for managing the instances of such SAP NetWeaver or SAP S/4HANA systems. The underlying idea of HA clustering is a fairly simple one: not a single large machine bears all of the load and risk, but rather one or more machines automatically drop in as an instant full replacement for the service or the machine that has failed. In the best case, this replacement process causes no interruption to the systems' users. 1.2. Audience Designing highly available solutions and implementing them based on SAP NetWeaver or SAP S/4HANA can be very complex, so deep knowledge about each layer of the infrastructure and every aspect of the deployment is needed to ensure reliable, repeatable, accurate, and quick automated actions. This document is intended for SAP and Red Hat certified or trained administrators and consultants who already have experience setting up SAP NetWeaver or S/4HANA application server instances and HA clusters using the RHEL HA add-on or other clustering solutions. Access to both the SAP Support Portal and the Red Hat Customer Portal is required to be able to download software and additional documentation. Red Hat Consulting is highly recommended to set up the cluster and customize the solution to meet customers' data center requirements, which are normally more complex than the solution presented in this document. 1.3. Concepts 1.3.1. SAP NetWeaver or S/4HANA High Availability A typical SAP NetWeaver or S/4HANA environment consists of three distinctive components: SAP (A)SCS instance SAP application server instances (Primary Application Server (PAS) and Additional Application Server (AAS) instances) Database instance The (A)SCS instance and the database instance are single points of failure (SPOF); therefore, it is important to ensure they are protected by an HA solution to avoid data loss or corruption and unnecessary outages of the SAP system. For more information on SPOF, please refer to Single point of failure . For the application servers, the enqueue lock table that is managed by the enqueue server is the most critical component. To protect it, SAP has developed the "Enqueue Replication Server" ( ERS ), which maintains a backup copy of the enqueue lock table. While the (A)SCS is running on one server, the ERS always needs to maintain a copy of the current enqueue table on another server. This document describes how to set up a two-node or three-node HA cluster solution for managing (A)SCS and ERS instances that conforms to the guidelines for high availability that have been established by both SAP and Red Hat. The HA solution can either be used for the "Standalone Enqueue Server" (ENSA1) that is typically used with SAP NetWeaver or the "Standalone Enqueue Server 2" (ENSA2) that is used by SAP S/4HANA. Additionally, it also provides guidelines for setting up HA cluster resources for managing other SAP instance types, like Primary Application Server (PAS) or Additional Application Server (AAS) instances that can either be managed as part of the same HA cluster or on a separate HA cluster. 1.3.2. ENSA1 vs. ENSA2 1.3.2.1. Standalone Enqueue Server (ENSA1) In case there is an issue with the (A)SCS instance, for the Standalone Enqueue Server (ENSA1), it is required that the (A)SCS instance "follows" the ERS instance. That is, an HA cluster has to start the (A)SCS instance on the host where the ERS instance is currently running. Until the host where the (A)SCS instance was running has been fenced, it can be noticed that both instances stay running on that same node. When the HA cluster node where the (A)SCS instance was previously running is back online, the HA cluster should move the ERS instance to that HA cluster node so that Enqueue Replication can resume. The following diagram shows the typical architecture of a Pacemaker HA cluster for managing SAP NetWeaver setups with the Standalone Enqueue Server (ENSA1). Even though the diagram shows that it is optionally possible to also have Primary and Additional Application Server (PAS/AAS) instances managed on separate servers, it is also supported to have these instances running on the same HA cluster nodes as the (A)SCS and ERS instances and have them managed by the cluster. Please see the following SAP documentation for more information on how the Standalone Enqueue Server (ENSA1) works: Standalone Enqueue Server . 1.3.2.2. Standalone Enqueue Server 2 (ENSA2) As shown above with ENSA1, if there is a failover, the Standalone Enqueue Server is required to "follow" the Enqueue Replication Server. That is, the HA software had to start the (A)SCS instance on the host where the ERS instance is currently running. In contrast to the Standalone Enqueue Server (ENSA1), the new Standalone Enqueue Server 2 (ENSA2) and Enqueue Replicator 2 no longer have these restrictions, which means that the ASCS instance can either be restarted on the same cluster node in case of a failure. Or it can also be moved to another HA cluster node, which doesn't have to be the HA cluster node where the ERS instance is running. This makes it possible to use a multi-node HA cluster setup with more than two HA cluster nodes when Standalone Enqueue Server 2 (ENSA2) is used. When using more than two HA cluster nodes, the ASCS will failover to a spare node, as illustrated in the following picture: For more information on ENSA2, please refer to SAP Note 2630416 - Support for Standalone Enqueue Server 2 . The following diagram shows the architecture of a three-node cluster that can be used for managing SAP S/4HANA setups with the Standalone Enqueue Server 2 (ENSA2). Even though the diagram shows that it is optionally possible to also have Primary and Additional Application Server (PAS/AAS) instances managed on separate servers, it is also supported to have these instances running on the same HA cluster nodes as the ASCS and ERS instances and have them managed by the cluster. For SAP S/4HANA, it is also possible to use a "cost-optimized" HA cluster setup, where the cluster nodes used for managing the HANA System Replication setup are also used for managing the ASCS and ERS instances. Please see Configuring a Cost-Optimized SAP S/4HANA HA cluster (HANA System Replication + ENSA2) using the RHEL HA Add-On , for more information. 1.4. Resource Agents The following resource agents are provided on RHEL 8 for managing different instance types of SAP environments via the resource-agents-sap RPM package . 1.4.1. SAPInstance resource agent The SAPInstance resource agent can be used for managing SAP application server instances using the SAP Start Service that is part of the SAP Kernel. In addition to the (A)SCS, ERS, PAS, and AAS instances, it can also be used for managing other SAP instance types, like standalone SAP Web Dispatcher or standalone SAP Gateway instances (see How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On for information on how to configure a pacemaker resource for managing such instances). All operations of the SAPInstance resource agent are done by using commands provided by the SAP Startup Framework, which communicate with the sapstartsrv process of each SAP instance. sapstartsrv knows 4 status colors: Color Meaning GREEN Everything is fine. YELLOW Something is wrong, but the service is still working. RED The service does not work. GRAY The service has not been started. The SAPInstance resource agent will interpret GREEN and YELLOW as OK, while statuses RED and GRAY are reported as NOT_RUNNING to the cluster. The versions of the SAPInstance resource agent shipped with RHEL 8 also support SAP instances that are managed by the systemd-enabled SAP Startup Framework (see The Systemd-Based SAP Startup Framework for further details). 1.4.1.1. Important SAPInstance resource agent parameters Attribute Name Required Default value Description InstanceName yes null The full SAP instance profile name ( <SAPSID>_<INSTNAME+INSTNO>_<virt hostname> ), for example, S4H_ASCS20_s4ascs . START_PROFILE no null The full path to the SAP Start Profile (with SAP NetWeaver 7.1 and newer, the SAP Start profile is identical to the instance profile). IS_ERS no false Only used for ASCS/ERS SAP Netweaver installations without implementing a promotable resource to allow the ASCS to find the ERS running on another cluster node after a resource failure. This parameter should be set to true for the resource used for managing the ERS instance for implementations following the SAP NetWeaver 7.50 HA certification (NW-HA-CLU-750; ENSA1). This also includes systems for NetWeaver less than 7.50 when using ENSA1. DIR_EXECUTABLE no null The full qualified path where to find sapstartsrv and sapcontrol binaries (only needed if the default location of the SAP Kernel binaries has been changed). DIR_PROFILE no null The full qualified path where to find the SAP START profile (only needed if the default location for the instance profiles has been changed). AUTOMATIC_RECOVER no false The SAPInstance resource agent tries to recover a failed start attempt automatically one time. This is done by killing running instance processes, removing the kill.sap file, and executing cleanipc . Sometimes a crashed SAP instance leaves some processes and/or shared memory segments behind. Setting this option to true will try to remove those leftovers during a start operation. MONITOR_SERVICES no disp+work | msg_server | enserver | enrepserver | jcontrol | jstart The list of services of an SAP instance that need to be monitored to determine the health of the instance. To monitor more/less, or other services that sapstartsrv supports, the list can be changed using this parameter. Names must match the strings used in the output of the command sapcontrol -nr [Instance-Nr] -function GetProcessList and multiple services separated by a (pipe) sign can be specified (the value for this parameter must always be the full list of services to monitor). The full list of parameters can be obtained by running pcs resource describe SAPInstance . 1.4.2. SAPDatabase resource agent The SAPDatabase resource agent can be used to manage single Oracle, IBM DB2, SAP ASE, or MaxDB database instances as part of a SAP NetWeaver based HA cluster setup. For more information, refer to Support Policies for RHEL High Availability Clusters - Management of SAP NetWeaver in a Cluster for the list of supported database versions on RHEL 8. The SAPDatabase resource agent does not run any database commands directly. It uses the SAP Host Agent to control the database. Therefore, the SAP Host Agent must be installed on each cluster node. Since the SAPDatabase resource agent only provides basic functionality for managing database instances, it is recommended to use the HA features of the databases instead (for example, Oracle RAC and IBM DB2 HA/DR) if more HA capabilities are required for the database instance. For S/4HANA HA setups, it is recommended to use HANA System Replication to make the HANA instance more robust against failures. The HANA System Replication HA setup can either be done using a separate cluster, or alternatively, it is also possible to use a "cost-optimized" S/4HANA HA setup where the ASCS and ERS instances are managed by the same HA cluster that is used for managing the HANA System Replication setup. 1.4.2.1. Important SAPDatabase resource agent parameters Attribute Name Required Default value Description SID yes null The unique database system identifier (usually identical to the SAP SID). DBTYPE yes null The type of database to manage. Valid values are: ADA (SAP MaxDB), DB6 (IBM DB2), ORA (Oracle DB), and SYB (SAP ASE). DBINSTANCE no null Must be used for special database implementations when the database instance name is not equal to the SID (e.g., Oracle DataGuard). DBOSUSER no ADA=taken from /etc/opt/sdb , DB6= db2SID , ORA= oraSID and oracle , SYB= sybSID , HDB= SIDadm The parameter can be set if the database processes on the operating system level are not executed with the default user of the used database type. STRICT_MONITORING no false This controls how the resource agent monitors the database. If set to true , it will use saphostctrl -function GetDatabaseStatus to test the database state. If set to false , only operating system processes are monitored. MONITOR_SERVICES no Instance|Database|Listener Defines which services are monitored by the SAPDatabase resource agent if STRICT_MONITORING is set to true . Service names must correspond with the output of the saphostctrl -function GetDatabaseStatus command. AUTOMATIC_RECOVER no false If you set this to true , saphostctrl -function StartDatabase will always be called with the -force option. The full list of parameters can be obtained by running pcs resource describe SAPDatabase . 1.5. Multi-SID Support (optional) The setup described in this document can also be used to manage the (A)SCS/ERS instances for multiple SAP environments (Multi-SID) within the same HA cluster. For example, SAP products that contain both ABAP and Java application server instances (like SAP Solution Manager) could be candidates for a Multi-SID cluster. However, some additional considerations need to be taken into account for such setups. 1.5.1. Unique SID and Instance Number To avoid conflicts, each pair of (A)SCS/ERS instances must use a different SID, and each instance must use a unique Instance Number even if they belong to a different SID. 1.5.2. Sizing Each HA cluster node must meet the SAP requirements for sizing to support multiple instances. 1.5.3. Installation For each (A)SCS/ERS pair, please repeat all the steps documented in sections 4.5, 4.6, and 4.7. Each (A)SCS/ERS pair will failover independently, following the configuration rules. Note With the default pacemaker configuration for RHEL 8, certain failures of resource actions (for example, the stop of a resource fails) will cause the cluster node to be fenced. This means that, for example, if the stop of the resource for one (A)SCS instance on a HA cluster node fails, it would cause an outage for all other resources running on the same HA cluster node. Please see the description of the on-fail property for monitoring operations in Configuring and managing high availability clusters - Chapter 21. Resource monitoring operations for options on how to modify this behavior. 1.6. Support policies Support Policies for RHEL High Availability Clusters - Management of SAP S/4HANA Support Policies for RHEL High Availability Clusters - Management of SAP NetWeaver in a Cluster
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_overview_v8-configuring-clusters-to-manage
Chapter 1. Important Changes to External Kernel Parameters
Chapter 1. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 6.4. These changes include added or updated procfs entries, sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. intel_idle.max_cstate A new kernel parameter, intel_idle.max_cstate , has been added to specify the maximum depth of a C-state, or to disable intel_idle and fall back to acpi_idle . For more information, refer to the /usr/share/doc/kernel-doc- <version> /Documentation/kernel-parameters.txt file. nobar The new nobar kernel parameter, specific to the AMD64 / Intel 64 architecture, can be used to not assign address space to the Base Address Registers (BARs) that were not assigned by the BIOS. noari The new noari kernel parameter can disable the use of PCIe Alternative Routing ID Interpretation (ARI). MD state file The state file of an MD array component device (found in the /sys/block/md <md_number> /md/dev- <device_name> directory) can now contain additional device states. For more information, refer to the /usr/share/doc/kernel-doc- <version> /Documentation/md.txt file. route_localnet The route_localnet kernel parameter can be used to enable the use of 127/8 for local routing purposes. For more information, refer to the /usr/share/doc/kernel-doc- <version> /Documentation/networking/ip-sysctl.txt file. pf_retrans The pf_retrans kernel parameter specifies the number of re-transmissions that will be attempted on a given path before traffic is redirected to an alternate transport (should one exist). For more information, refer to the /usr/share/doc/kernel-doc- <version> /Documentation/networking/ip-sysctl.txt file. traceevent The new traceevent library, used by perf , uses the following sysfs control files: /sys/kernel/fadump_* On 64-bit IBM POWER machines, the following control files have been added to be used by the firmware-assisted dump feature: For more information about these files, refer to /usr/share/doc/kernel-doc- <version> /Documentation/powerpc/firmware-assisted-dump.txt . Transparent Hugepages The /sys/kernel/mm/transparent_hugepage symbolic link, which points to /sys/kernel/mm/redhat_transparent_hugepage , has been added for consistency purposes. Documentation for transparent hugepages has been added to the following file: vmbus_show_device_attr The vmbus_show_device_attr attribute of the Hyper-V vmbus driver shows the device attribute in sysfs. This is invoked when the /sys/bus/vmbus/devices/ <busdevice> / <attr_name> file is read. BNA debugfs Interface The BNA debugfs interface can be accessed through the bna/pci_dev: <pci_name> hierarchy (note that the debugfs file system must be mounted). The following debugging services are available for each pci_dev> : fwtrc - used to collect current firmware trace. fwsave - used to collect last-saved firmware trace as a result of firmware crash. regwr - used to write one word to the chip register. regrd - used to read one or more words from the chip register. iwlegacy debug_level The iwlegacy driver includes a new sysfs control file, /sys/bus/pci/drivers/iwl/debug_level , to control per-device level of debugging. The CONFIG_IWLEGACY_DEBUG option enables this feature. iwlwifi debug_level The iwlwifi driver includes a new sysfs control file, /sys/class/net/wlan0/device/debug_level , to control per-device level of debugging. The CONFIG_IWLWIFI_DEBUG option enables this feature. ie6xx_wdt If debugfs is mounted, the new /sys/kernel/debug/ie6xx_wdt file contains a value that determines whether the system was rebooted by watchdog. supported_krb5_enctypes The new /proc/fs/nfsd/supported_krb5_enctypes proc file lists the encryption types supported by the kernel's gss_krb5 code. usbmixer The /proc/asound/card <card_number> /usbmixer proc file has been added. It contains a mapping between the ALSA control API and the USB mixer control units. This file can be used debugging and problem diagnostics. codec# <number> The /proc/asound/card <card_number> /codec# <number> proc files now contain information about the D3cold power state, the deepest power-saving state for a PCIe device. The codec# <number> files now also contain additional power state information, specifically: reset status , clock stop ok , and power states error . The following is an example output: cgroup.procs The cgroup.procs file is now writable. Writing a TGID into the cgroup.procs file of a cgroup moves that thread group into that cgroup. sysfs_dirent The last sysfs_dirent , which represents a single sysfs node, is now cached to improve scalability of the readdir function. iov The iov sysfs directory was added under the ib device. This directory is used to manage and examine the port P_Key and guid paravirtualization. FDMI attributes Fabric Device Management Interface (FDMI) attributes can now be exposed to the fcoe driver via the fc_host class object. ltm_capable The /sys/bus/usb/devices/ <device> /ltm_capable file has been added to show whether a device supports Latency Tolerance Messaging (LTM). This file is present for both USB 2.0 and USB 3.0 devices. fwdump_state The /sys/class/net/eth <number> /device/fwdump_state file has been added to determine whether the firmware dump feature is enabled or disabled. flags , registers The Commands in Q item was added to the /sys/block/rssd <number> /registers file. This file's output was also re-formatted. Also, a new /sys/block/rssd <number> /flags file has been added. This read-only file dumps the flags in a port and driver data structure. duplex The /sys/class/net/eth <number> /duplex file now reports unknown when the NIC duplex state is DUPLEX_UNKNOWN . Mountpoint Interface A sysfs mountpoint interface was added to the perf tool. TCP_USER_TIMEOUT TCP_USER_TIMEOUT is a TCP level socket option that specifies the maximum amount of time (in milliseconds) that transmitted data may remain unacknowledged before TCP will forcefully close the corresponding connection and return ETIMEDOUT to the application. If the value 0 is specified, TCP will continue to use the system default. IPPROTO_ICMP The IPPROTO_ICMP socket option makes it possible to send ICMP_ECHO messages and receive the corresponding ICMP_ECHOREPLY messages without any special privileges. Increased Default in ST_MAX_TAPES In Red Hat Enterprise Linux 6.4, the number of supported tape drives has increased from 128 to 512. Increased Number of Supported IOMMUs The number of supported input/output memory management units (IOMMUs) has been increased to be the same as the number of I/O Advanced Programmable Interrupt Controllers (APICs; defined in MAX_IO_APICS ). New Module Parameters The following list summarizes new command line arguments passed to various kernel modules. For more information about the majority of these module parameters, refer to the output of the modinfo <module> command, for example, modinfo bna . New kvm module parameter: min_timer_period_us - Do not allow the guest to program periodic timers with small interval, since the hrtimers are not throttled by the host scheduler, and allow tuning the interval with this parameter. The default value is 500us . New kvm-intel module parameter: enable_ept_ad_bits - Parameter to control enabling/disabling A/D bits, if supported by CPU. The default value is enabled . New ata_piix module parameter: prefer_ms_hyperv - On Hyper-V Hypervisors, the disks are exposed on both the emulated SATA controller and on the paravirtualized drivers. The CD/DVD devices are only exposed on the emulated controller. Request to ignore ATA devices on this host. The default value is enabled . New drm module parameters: edid_fixup - Minimum number of valid EDID header bytes (0-8). The default value is 6 . edid_firmware - Do not probe monitor, use specified EDID blob from built-in data or /lib/firmware instead. New i915 module parameters: New nouveau module parameter: New radeon module parameter: New i2c-ismt module parameters: New iw-cxgb4 module parameters: New mlx4_ib module parameter: New ib_qib module parameter: New bna module parameter: New cxgb4 module parameters: New e1000e module parameter: New igb module parameter: New igbvf module parameter: New ixgbe module parameter: New ixgbevf module parameter: New hv_netvsc module parameter: New mlx4_core module parameter: enable_64b_cqe_eqe - Enable 64 byte CQEs/EQEs when the firmware supports this. New sfc module parameters: New ath5k module parameter: New iwlegacy module parameters: New wlcore module parameter: New s390 scm_block module parameters: New s390 zfcp module parameters: New aacraid module parameters: New be2iscsi module parameter: New lpfc module parameter: New megaraid_sas module parameters: New qla4xxx module parameters: New hv_storvsc module parameter: New ehci-hcd driver parameter: io_watchdog_force - Force I/O watchdog to be ON for all devices. New ie6xx_wdt module parameters: New snd-ua101 module parameter:
[ "/sys/kernel/debug/tracing/events/header_page /sys/kernel/debug/tracing/events/.../.../format /sys/bus/event_source/devices/ <dev> /format /sys/bus/event_source/devices/ <dev> /events /sys/bus/event_source/devices/ <dev> /type", "/sys/kernel/fadump_enabled /sys/kernel/fadump_registered /sys/kernel/fadump_release_mem", "/usr/share/doc/kernel-doc- <version> /Documentation/vm/transhuge.txt", "Power: setting=D0, actual=D0, Error, Clock-stop-OK, Setting-reset", "module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR);", "module_param_named(eptad, enable_ept_ad_bits, bool, S_IRUGO);", "module_param(prefer_ms_hyperv, int, 0);", "module_param_named(edid_fixup, edid_fixup, int, 0400); module_param_string(edid_firmware, edid_firmware, sizeof(edid_firmware), 0644);", "module_param_named(lvds_channel_mode, i915_lvds_channel_mode, int, 0600); module_param_named(i915_enable_ppgtt, i915_enable_ppgtt, int, 0600); module_param_named(invert_brightness, i915_panel_invert_brightness, int, 0600);", "module_param_named(vram_type, nouveau_vram_type, charp, 0400);", "module_param_named(lockup_timeout, radeon_lockup_timeout, int, 0444);", "module_param(stop_on_error, uint, S_IRUGO); module_param(fair, uint, S_IRUGO);", "module_param(db_delay_usecs, int, 0644); module_param(db_fc_threshold, int, 0644);", "module_param_named(sm_guid_assign, mlx4_ib_sm_guid_assign, int, 0444);", "module_param_named(cc_table_size, qib_cc_table_size, uint, S_IRUGO);", "module_param(bna_debugfs_enable, uint, S_IRUGO | S_IWUSR);", "module_param(dbfifo_int_thresh, int, 0644); module_param(dbfifo_drain_delay, int, 0644);", "module_param(debug, int, 0);", "module_param(debug, int, 0);", "module_param(debug, int, 0);", "module_param(debug, int, 0);", "module_param(debug, int, 0);", "module_param(ring_size, int, S_IRUGO);", "module_param(enable_64b_cqe_eqe, bool, 0444);", "module_param(vf_max_tx_channels, uint, 0444); module_param(max_vfs, int, 0444);", "module_param_named(no_hw_rfkill_switch, ath5k_modparam_no_hw_rfkill_switch, bool, S_IRUGO);", "module_param(led_mode, int, S_IRUGO); module_param(bt_coex_active, bool, S_IRUGO);", "module_param(no_recovery, bool, S_IRUSR | S_IWUSR);", "module_param(nr_requests, uint, S_IRUGO); module_param(write_cluster_size, uint, S_IRUGO)", "module_param_named(no_auto_port_rescan, no_auto_port_rescan, bool, 0600); module_param_named(datarouter, enable_multibuffer, bool, 0400); module_param_named(dif, enable_dif, bool, 0400);", "module_param(aac_sync_mode, int, S_IRUGO|S_IWUSR); module_param(aac_convert_sgl, int, S_IRUGO|S_IWUSR);", "module_param(beiscsi_##_name, uint, S_IRUGO);", "module_param(lpfc_req_fw_upgrade, int, S_IRUGO|S_IWUSR);", "module_param(msix_vectors, int, S_IRUGO); module_param(throttlequeuedepth, int, S_IRUGO); module_param(resetwaittime, int, S_IRUGO);", "module_param(ql4xqfulltracking, int, S_IRUGO | S_IWUSR); module_param(ql4xmdcapmask, int, S_IRUGO); module_param(ql4xenablemd, int, S_IRUGO | S_IWUSR);", "module_param(storvsc_ringbuffer_size, int, S_IRUGO);", "module_param(io_watchdog_force, uint, S_IRUGO);", "module_param(timeout, uint, 0); module_param(nowayout, bool, 0); module_param(resetmode, byte, 0);", "module_param(queue_length, uint, 0644);" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ch-Important-Changes-to-External-Kernel-Parameters
Chapter 14. Troubleshooting monitoring issues
Chapter 14. Troubleshooting monitoring issues 14.1. Investigating why user-defined metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is a issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a ServiceMonitor or PodMonitor resource See Accessing metrics targets in the Administrator perspective 14.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Review the TSDB status using the Prometheus HTTP API by running the following commands when logged in as a cluster administrator: Get the Prometheus API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}') Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Query the TSDB status for Prometheus by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... Additional resources Accessing monitoring APIs by using the CLI Setting a scrape sample limit for user-defined projects Submitting a support case 14.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus. The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally. Note There are two KubePersistentVolumeFillingUp alerts: Critical alert : The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining. Warning alert : The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days. To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo "[0-9|A-Z]{26}")' 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. Example output 308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B Identify which and how many blocks could be removed, then remove the blocks. The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod: USD oc debug prometheus-k8s-0 -n openshift-monitoring \ -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 \ -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \ while read BLOCK; do rm -r /prometheus/USDBLOCK; done' Verify the usage of the mounted PV and ensure there is enough space available by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/ 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining: Example output Starting pod/prometheus-k8s-0-debug-j82w4 ... Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod ...
[ "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/troubleshooting-monitoring-issues
Part II. Requirements and your responsibilities
Part II. Requirements and your responsibilities Before you start using simple content access, review the hardware and software requirements and your responsibilities when you use this tool. Learn more Review the general requirements for using simple content access: Requirements Review information about your responsibilities when you use simple content access: Your responsibilities
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_simple_content_access/assembly-requirements-and-your-responsibilities
9.11. Managing User Groups
9.11. Managing User Groups User groups are a way of centralizing control over important management tasks, particularly access control and password policies. Four groups are created during the installation, specifically for use by IdM operations: ipausers, which contains all users. admins, which contains administrative users. The initial admin user belongs to this group. trusted admins, which contains administrative users used to manage Active Directory trusts. editors, which is a special group for users working through the web UI. This group allows users to edit other users' entries, though without all of the rights of the admin user. Note Some operating systems limit the number of groups that can be assigned to system users. For example, Solaris and AIX systems both limit users to 16 groups per user. This can be an issue when using nested groups, when a user may be automatically added to multiple groups. 9.11.1. Types of Groups in IdM All groups in Identity Management are essentially static groups, meaning that the members of the group are manually and explicitly added to the group. Tangentially, IdM allows nested groups , where a group is a member of another group. In that case, all of the group members of the member group automatically belong to the parent group, as well. Automembership rules allow new users to be added to groups automatically, using attributes in the user entry to determine what groups the user should belong to. Automembership rules are covered in Chapter 25, Policy: Defining Automatic Group Membership for Users and Hosts . The way groups are defined in IdM is simple, but there are different configuration options for groups which can change what kinds of members can be added. Some types of groups in IdM are based not on how members are added, but rather where the member entries originate: Internal groups (the default), where all members belong to the IdM domain. External groups, where some or all of the members exist in an identity store outside of the IdM domain. This can be a local system, an Active Directory domain, or a directory service. Another difference is whether groups are created with POSIX attributes. Most Linux users require some kind of POSIX attributes, but groups which interact with Active Directory or Samba must be non-POSIX. By default, IdM creates non-POSIX groups, but there is an explicit option to create a POSIX group (adding the posixgroup object class). Because groups are easy to create, it is possible to be very flexible in what groups to create and how they are organized. Groups can be defined around organizational divisions like departments, physical locations, or IdM or infrastructure usage guidelines for access controls. 9.11.2. Group Object Classes When a group entry is created, it is automatically assigned certain LDAP object classes. (LDAP object classes and attributes are discussed in detail in the Directory Server Deployment Guide and the Directory Server Schema Reference .) For groups, only two attributes truly matter: the name and the description. Table 9.4. Default Identity Management Group Object Classes Description Object Classes IdM object classes ipaobject ipausergroup nestedgroup Group object classes groupofnames 9.11.2.1. Creating User Groups 9.11.2.1.1. With the Web UI Open the Identity tab, and select the User Groups subtab. Click the Add link at the top of the groups list. Enter all of the information for the group. A unique name. This is the identifier used for the group in the IdM domain, and it cannot be changed after it is created. The name cannot contain spaces, but other separators like an underscore (_) are allowed. A text description of the group. Whether the group is a POSIX group, which adds Linux-specific information to the entry. By default, all groups are POSIX groups unless they are explicitly configured not to be. Non-POSIX groups can be created for interoperability with Windows or Samba. Optionally, the GID number for the group. All POSIX groups require a GID number, but IdM automatically assigns the GID number. Setting a GID number is not necessary because of the risk of collisions. If a GID number is given manually, IdM will not override the specified GID number, even if it is not unique. Click the Add and Edit button to go immediately to the member selection page. Select the members, as described in Section 9.11.2.2.1, "With the Web UI (Group Page)" . 9.11.2.1.2. With the Command Line New groups are created using the group-add command. (This adds only the group; members are added separately.) Two attributes are always required: the group name and the group description. If those attributes are not given as arguments, then the script prompts for them. Additionally, there is one other configuration option, --nonposix . (By default, all groups are created as POSIX groups.) To enable interoperability with Windows users and groups and programs like Samba, it is possible to create non-POSIX groups by using the --nonposix option. This option tells the script not to add the posixGroup object class to the entry. For example: When no arguments are used, the command prompts for the required group account information: Important When a group is created without specifying a GID number, then the group entry is assigned the ID number that is available in the server or replica range. (Number ranges are described more in Section 9.9, "Managing Unique UID and GID Number Assignments" .) This means that a group always has a unique number for its GID number. If a number is manually assigned to a group entry, the server does not validate that the gidNumber is unique. It will allow duplicate IDs; this is expected (though discouraged) behavior for POSIX entries. If two entries are assigned the same ID number, only the first entry is returned in a search for that ID number. However, both entries will be returned in searches for other attributes or with ipa group-find --all . Note You cannot edit the group name. The group name is the primary key, so changing it is the equivalent of deleting the group and creating a new one. 9.11.2.2. Adding Group Members 9.11.2.2.1. With the Web UI (Group Page) Note This procedure adds a user to a group. User groups can contain other user groups as their members. These are nested groups. It can take up to several minutes for the members of the child group to show up as members of the parent group. This is especially true on virtual machines where the nested groups have more than 500 members. When creating nested groups, be careful not to create recursive groups. For example, if GroupA is a member of GroupB, do not add GroupB as a member of GroupA. Recursive groups are not supported and can cause unpredictable behavior. Open the Identity tab, and select the User Groups subtab. Click the name of the group to which to add members. Click the Add link at the top of the task area. Click the checkbox by the names of the users to add, and click the right arrows button, >> , to move the names to the selection box. Click the Add button. Group members can be users or other user groups. It can take up to several minutes for the members of the child group to show up as members of the parent group. This is especially true on virtual machines where the nested groups have more than 500 members. 9.11.2.2.2. With the Web UI (User's Page) Users can also be added to a group through the user's page. Open the Identity tab, and select the Users subtab. Click the name of the user to edit. Open the User Groups tab on the user entry page. Click the Add link at the top of the task area. Click the checkbox by the names of the groups for the user to join, and click the right arrows button, >> , to move the groups to the selection box. Click the Add button. 9.11.2.2.3. With the Command Line Members are added to a group using the group-add-member command. This command can add both users as group members and other groups as group members. The syntax of the group-add-member command requires only the group name and a comma-separated list of users to add: For example, this adds three users to the engineering group: Likewise, other groups can be added as members, which creates nested groups: When displaying nested groups, members are listed as members and the members of any member groups are listed as indirect members. For example: It can take up to several minutes for the members of the child group to show up as members of the parent group. This is especially true on virtual machines where the nested groups have more than 500 members. Note When creating nested groups, be careful not to create recursive groups. For example, if GroupA is a member of GroupB, do not add GroupB as a member of GroupA. Recursive groups are not supported and can cause unpredictable behavior. A group member is removed using the group-remove-member command. 9.11.2.2.4. Viewing Direct and Indirect Members of a Group User groups can contain other user groups as members. This is called a nested group . This also means that a group has two types of members: Direct members , which are added explicitly to the group Indirect members , which are members of the group because they are members of another user group which is a member of the group The IdM web UI has an easy way to view direct and indirect members of a group. The members list is filtered by member type, and this can be toggled by selecting the Direct and Indirect radio buttons at the top right corner of the members list. Figure 9.4. Indirect and Direct Members Being able to track indirect members makes it easier to assign group membership properly, without duplicating membership. 9.11.2.3. Deleting User Groups When a user group is deleted, only the group is removed. The user accounts of group members (including nested groups) are not affected. Additionally, any access control delegations that apply to that group are removed. Warning Deleting a group is immediate and permanent. If any group configuration (such as delegations) is required, it must be assigned to another group or a new group created. 9.11.2.3.1. With the Web UI Open the Identity tab, and select the User Groups subtab. Select the checkbox by the name of the group to delete. Click the Delete link at the top of the task area. When prompted, confirm the delete action. 9.11.2.3.2. With the Command Line The group-del command to deletes the specified group. For example: 9.11.3. Searching for Users and Groups The user searches in IdM can be run against simple (full word) or partial search strings. The range of attributes that are searched is configured as part of the default IdM configuration, as in Section 9.10.4, "Specifying Default User and Group Attributes" . 9.11.3.1. Setting Search Limits 9.11.3.1.1. Types of Search Limits and Where They Apply Some searches can result in a large number of entries being returned, possibly even all entries. Search limits improve overall server performance by limiting how long the server spends in a search and how many entries are returned. Search limits have a dual purpose to improve server performance by reducing the search load and to improve usability by returning a smaller - and therefore easier to browse - set of entries. The IdM server has several different limits imposed on searches: The search limit configuration for the IdM server. This is a setting for the IdM server itself, which is applied to all requests sent to the server from all IdM clients, the IdM CLI tools, and the IdM web UI for normal page display. By default, this limit is 100 entries. The time limit configuration for the IdM server. Much like the search size limit, the time limit sets a maximum amount of time that the IdM server, itself, waits for searches to run. Once it reaches that limit, the server stops the search and returns whatever entries were returned in that time. By default, this limit is two seconds. The page size limit. Although not strictly a search limit, the page size limit does limit how many entries are returned per page. The server returns the set of entries, up to the search limit, and then sorts and displays 20 entries per page. Paging results makes the results more understandable and more viewable. This is hard-coded to 20 for all searches. The LDAP search limit (--pkey option). All searches performed in the UI, and CLI searches which use the --pkey option, override the search limit set in the IdM server configuration and use the search limit set in the underlying LDAP directory. By default, this limit is 2000 entries. It can be edited by editing the 389 Directory Server configuration. 9.11.3.1.2. Setting IdM Search Limits Search limits set caps on the number of records returned or the time spent searching when querying the database for user or group entries. There are two types of search limits: time limits and size (number) limits. With the default settings, users are limited to two-second searches and no more than 100 records returned per search. Important Setting search size or time limits too high can negatively affect IdM server performance. 9.11.3.1.2.1. With the Web UI Open the IPA Server tab. Select the Configuration subtab. Scroll to the Search Options area. Change the search limit settings. Search size limit , the maximum number of records to return in a search. Search time limit , the maximum amount of time, in seconds, to spend on a search before the server returns results. Note Setting the time limit or size limit value to -1 means that there are no limits on searches. When the changes are complete, click the Update link at the top of the Configuration page. 9.11.3.1.2.2. With the Command Line The search limits can be changed using the config-mod command. Note Setting the time limit or size limit value to -1 means that there are no limits on searches. 9.11.3.1.3. Overriding the Search Defaults Part of the server configuration is setting global defaults for size and time limits on searches. While these limits are always enforced in the web UI, they can be overridden with any *-find command run through the command line. The --sizelimit and --timelimit options set alternative size and time limits, respectively, for that specific command run. The limits can be higher or lower, depending on the kinds of results you need. For example, if the default time limit is 60 seconds and a search is going to take longer, the time limit can be increased to 120 seconds: 9.11.3.2. Setting Search Attributes A search for users or groups does not automatically search every possible attribute for that attribute. Rather, it searches a specific subset of attributes, and that list is configurable. When adding attributes to the user or group search fields, make sure that there is a corresponding index within the LDAP directory for that attribute. Searches are performed based on indexes. Most standard LDAP attributes have indexes, but any custom attributes must have indexes created for them. Creating indexes is described in the indexes chapter in the Directory Server Administrator's Guide . 9.11.3.2.1. Default Attributes Checked by Searches By default, there are six attributes that are indexed for user searches and two that are indexed for group searches. These are listed in Table 9.5, "Default Search Attributes" . All search attributes are searched in a user/group search. Table 9.5. Default Search Attributes User Search Attributes First name Last name Login ID Job title Organizational unit Phone number Group Search Attributes Name Description The attributes which are searched in user and group searches can be changed, as described in Section 9.11.3.2, "Setting Search Attributes" and Section 9.11.3.2.3, "Changing Group Search Attributes" . 9.11.3.2.2. Changing User Search Attributes 9.11.3.2.2.1. From the Web UI Open the IPA Server tab. Select the Configuration subtab. Scroll to the User Options area. Add any additional search attributes, in a comma-separated list, in the User search fields field. When the changes are complete, click the Update link at the top of the Configuration page. 9.11.3.2.2.2. From the Command Line To change the search attributes, use the --usersearch option to set the attributes for user searches. Note Always give the complete list of search attributes. Whatever values are passed with the configuration argument overwrite the settings. 9.11.3.2.3. Changing Group Search Attributes A search for users or groups does not automatically search every possible attribute for that attribute. Rather, it searches a specific subset of attributes, and that list is configurable. When adding attributes to the user or group search fields, make sure that there is a corresponding index within the LDAP directory for that attribute. Searches are performed based on indexes. Most standard LDAP attributes have indexes, but any custom attributes must have indexes created for them. Creating indexes is described in the indexes chapter in the Directory Server Administrator's Guide . 9.11.3.2.3.1. From the Web UI Open the IPA Server tab. Select the Configuration subtab. Scroll to the Group Options area. Add any additional search attributes, in a comma-separated list, in the Group search fields field. When the changes are complete, click the Update link at the top of the Configuration page. 9.11.3.2.3.2. From the Command Line To change the search attributes, use the --groupsearch options to set the attributes for group searches. Note Always give the complete list of search attributes. Whatever values are passed with the configuration argument overwrite the settings. 9.11.3.2.4. Limits on Attributes Returned in Search Results Searches can be performed on attributes that are not displayed in the UI. This means that entries can be returned in a search that do not appear to match the given filter. This is especially common if the search information is very short, which increases the likelihood of a match. 9.11.3.3. Searching for Groups Based on Type Group definitions are simple, but because it is possible to create automember rules which automatically assign entries to groups, nested groups which include members implicitly, and groups based on member attributes such as POSIX, the reality of the group definitions can be very complex. There are numerous different options with the group-find command which allow groups to be searched based on who the members are and are not and other attributes of the group definition. For example, user private groups are never displayed in the IdM UI and are not returned in a regular search. Using the --private option, however, limits the search results to only private groups. Group searches can also be based on who does or does not belong to a group. This can mean single users, other groups, or even other configuration entries like roles and host-based access control definitions. For example, the first search shows what groups the user jsmith belongs to: The other search shows all the groups that jsmith does not belong to: Some useful group search options are listed in Table 9.6, "Common Group Search Options" . Table 9.6. Common Group Search Options Option Criteria Description --private Displays only private groups. --gid Displays only the group which matches the complete, specified GID. --group-name Displays only groups with that name or part of their name. --users, --no-users Displays only groups which have the given users as members (or which do not include the given user). --in-hbacrules, --not-inhbac-rules Displays only groups which belong to a given host-based access control rule (or which do not belong to the rule, for the --not-in option). There are similar options to display (or not) groups which belong to a specified sudo rule and role. --in-groups, --not-in-groups Displays only groups which belong to another, specified group (or which do not belong to the group, for the --not-in option). There are similar options to display (or not) groups which belong to a specified netgroup.
[ "[bjensen@server ~]USD ipa group-add groupName --desc=\" description \" [--nonposix]", "[bjensen@server ~]USD ipa group-add examplegroup --desc=\"for examples\" --nonposix ---------------------- Added group \"examplegroup\" ---------------------- Group name: examplegroup Description: for examples GID: 855800010", "[bjensen@server ~]USD ipa group-add Group name: engineering Description: for engineers ------------------------- Added group \"engineering\" ------------------------- Group name: engineering Description: for engineers GID: 387115842", "[bjensen@server ~]USD ipa group-add-member groupName [--users= list ] [--groups= list ]", "[bjensen@server ~]USD ipa group-add-member engineering --users=jsmith,bjensen,mreynolds Group name: engineering Description: for engineers GID: 387115842 Member users: jsmith,bjensen,mreynolds ------------------------- Number of members added 3 -------------------------", "[bjensen@server ~]USD ipa group-add-member engineering --groups=dev,qe1,dev2 Group name: engineering Description: for engineers GID: 387115842 Member groups: dev,qe1,dev2 ------------------------- Number of members added 3 -------------------------", "[bjensen@server ~]USD ipa group-show examplegroup Group name: examplegroup Description: for examples GID: 93200002 Member users: jsmith,bjensen,mreynolds Member groups: californiausers Indirect Member users: sbeckett,acalavicci", "[bjensen@server ~]USD ipa group-remove-member engineering --users=jsmith Group name: engineering Description: for engineers GID: 855800009 Member users: bjensen,mreynolds --------------------------- Number of members removed 1 ---------------------------", "[bjensen@server ~]USD ipa group-del examplegroup", "[bjensen@server ~]USD ipa config-mod --searchtimelimit=5 --searchrecordslimit=500 Max. username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain for new users: example.com Search time limit: 5 Search size limit: 50 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Password Expiration Notification (days): 4", "[jsmith@ipaserver ~]USD ipa user-find smith --timelimit=120", "[bjensen@server ~]USD ipa config-mod --usersearch=uid,givenname,sn,telephonenumber,ou,title", "[bjensen@server ~]USD ipa config-mod --groupsearch=cn,description", "ipa group-find --private --------------- 1 group matched --------------- Group name: jsmith Description: User private group for jsmith GID: 1084600001 ---------------------------- Number of entries returned 1 ----------------------------", "ipa group-find --user=jsmith --------------- 1 group matched --------------- Group name: ipausers Description: Default group for all users Member users: jsmith ---------------------------- Number of entries returned 1 ----------------------------", "ipa group-find --no-user=jsmith ---------------- 3 groups matched ---------------- Group name: admins Description: Account administrators group GID: 1084600000 Member users: admin Group name: editors Description: Limited admins who can edit other users GID: 1084600002 Group name: trust admins Description: Trusts administrators group Member users: admin ---------------------------- Number of entries returned 3 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/user-groups
Chapter 2. Apicurio Registry content rules
Chapter 2. Apicurio Registry content rules This chapter introduces the optional rules used to govern Apicurio Registry content and provides details on the available rule configuration: Section 2.1, "Govern Apicurio Registry content using rules" Section 2.1.1, "When rules are applied" Section 2.1.2, "Order of precedence of rules" Section 2.1.3, "How rules work" Section 2.1.4, "Content rule configuration" 2.1. Govern Apicurio Registry content using rules To govern the evolution of artifact content added to Apicurio Registry, you can configure optional rules. All configured global rules or artifact-specific rules must pass before a new artifact version can be uploaded to Apicurio Registry. Configured artifact-specific rules override any configured global rules. The goal of these rules is to prevent invalid content from being added to Apicurio Registry. For example, content can be invalid for the following reasons: Invalid syntax for a given artifact type, for example, AVRO or PROTOBUF . Valid syntax, but semantics violate a specification. Incompatibility, when new content includes breaking changes relative to the current artifact version. Artifact reference integrity, for example, a duplicate or non-existent artifact reference mapping. You can enable optional content rules using the Apicurio Registry web console, REST API commands, or a Java client application. 2.1.1. When rules are applied Rules are applied only when content is added to Apicurio Registry. This includes the following REST operations: Adding an artifact Updating an artifact Adding an artifact version If a rule is violated, Apicurio Registry returns an HTTP error. The response body includes the violated rule and a message showing what went wrong. 2.1.2. Order of precedence of rules The order of precedence for artifact-specific and global rules is as follows: If you enable an artifact-specific rule, and the equivalent global rule is enabled, the artifact rule overrides the global rule. If you disable an artifact-specific rule, and the equivalent global rule is enabled, the global rule applies. If you disable an artifact-specific rule, and the equivalent global rule is disabled, the rule is disabled for all artifacts. If you set a rule value to NONE at the artifact level, you override the enabled global rule. In this case, the artifact rule value of NONE takes precedence for this artifact, but the enabled global rule continues to apply to any other artifacts that have the rule disabled at the artifact level. 2.1.3. How rules work Each rule has a name and configuration information. Apicurio Registry maintains the list of rules for each artifact and the list of global rules. Each rule in the list consists of a name and configuration for the rule implementation. A rule is provided with the content of the current version of the artifact (if one exists) and the new version of the artifact being added. The rule implementation returns true or false depending on whether the artifact passes the rule. If not, Apicurio Registry reports the reason why in an HTTP error response. Some rules might not use the version of the content. For example, compatibility rules use versions, but syntax or semantic validity rules do not. Additional resources For more details, see Chapter 10, Apicurio Registry content rule reference . 2.1.4. Content rule configuration Administrators can configure Apicurio Registry global rules and artifact-specific rules. Developers can configure artifact-specific rules only. Apicurio Registry applies the rules configured for the specific artifact. If no rules are configured at that level, Apicurio Registry applies the globally configured rules. If no global rules are configured, no rules are applied. Configure artifact rules You can configure artifact rules using the Apicurio Registry web console or REST API. For details, see the following: Chapter 3, Managing Apicurio Registry content using the web console Apicurio Registry REST API documentation Configure global rules Administrators can configure global rules in several ways: Use the admin/rules operations in the REST API Use the Apicurio Registry web console Set default global rules using Apicurio Registry application properties Configure default global rules Administrators can configure Apicurio Registry at the application level to enable or disable global rules. You can configure default global rules at installation time without post-install configuration using the following application property format: The following rule names are currently supported: compatibility validity integrity The value of the application property must be a valid configuration option that is specific to the rule being configured. Note You can configure these application properties as Java system properties or include them in the Quarkus application.properties file. For more details, see the Quarkus documentation .
[ "registry.rules.global.<ruleName>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/apicurio_registry_user_guide/intro-to-registry-rules_registry
Release notes for Eclipse Temurin 11.0.20
Release notes for Eclipse Temurin 11.0.20 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.20/index
Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster
Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster 5.1. Scaling up storage on a VMware cluster To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disk is of the same size and type as the disk used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.2. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.3. Scaling out storage capacity on a VMware cluster 5.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.4. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_of_vmware_openshift_data_foundation_cluster
About
About OpenShift Container Platform 4.15 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/about/index
Chapter 20. Directory Servers
Chapter 20. Directory Servers 20.1. OpenLDAP LDAP (Lightweight Directory Access Protocol) is a set of open protocols used to access centrally stored information over a network. It is based on the X.500 standard for directory sharing, but is less complex and resource-intensive. For this reason, LDAP is sometimes referred to as " X.500 Lite " . Like X.500, LDAP organizes information in a hierarchical manner using directories. These directories can store a variety of information such as names, addresses, or phone numbers, and can even be used in a manner similar to the Network Information Service ( NIS ), enabling anyone to access their account from any machine on the LDAP enabled network. LDAP is commonly used for centrally managed users and groups, user authentication, or system configuration. It can also serve as a virtual phone directory, allowing users to easily access contact information for other users. Additionally, it can refer a user to other LDAP servers throughout the world, and thus provide an ad-hoc global repository of information. However, it is most frequently used within individual organizations such as universities, government departments, and private companies. This section covers the installation and configuration of OpenLDAP 2.4 , an open source implementation of the LDAPv2 and LDAPv3 protocols. 20.1.1. Introduction to LDAP Using a client-server architecture, LDAP provides a reliable means to create a central information directory accessible from the network. When a client attempts to modify information within this directory, the server verifies the user has permission to make the change, and then adds or updates the entry as requested. To ensure the communication is secure, the Transport Layer Security ( TLS ) cryptographic protocol can be used to prevent an attacker from intercepting the transmission. Important The OpenLDAP suite in Red Hat Enterprise Linux 6 no longer uses OpenSSL. Instead, it uses the Mozilla implementation of Network Security Services ( NSS ). OpenLDAP continues to work with existing certificates, keys, and other TLS configuration. For more information on how to configure it to use Mozilla certificate and key database, see How do I use TLS/SSL with Mozilla NSS . Important Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on the SSLv3 protocol for security. OpenLDAP is one of the system components that do not provide configuration parameters that allow SSLv3 to be effectively disabled. To mitigate the risk, it is recommended that you use the stunnel command to provide a secure tunnel, and disable stunnel from using SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 6 Security Guide . The LDAP server supports several database systems, which gives administrators the flexibility to choose the best suited solution for the type of information they are planning to serve. Because of a well-defined client Application Programming Interface ( API ), the number of applications able to communicate with an LDAP server is numerous, and increasing in both quantity and quality. 20.1.1.1. LDAP Terminology The following is a list of LDAP-specific terms that are used within this chapter: entry A single unit within an LDAP directory. Each entry is identified by its unique Distinguished Name ( DN ). attribute Information directly associated with an entry. For example, if an organization is represented as an LDAP entry, attributes associated with this organization might include an address, a fax number, etc. Similarly, people can be represented as entries with common attributes such as personal telephone number or email address. An attribute can either have a single value, or an unordered space-separated list of values. While certain attributes are optional, others are required. Required attributes are specified using the objectClass definition, and can be found in schema files located in the /etc/openldap/slapd.d/cn=config/cn=schema/ directory. The assertion of an attribute and its corresponding value is also referred to as a Relative Distinguished Name ( RDN ). Unlike distinguished names that are unique globally, a relative distinguished name is only unique per entry. LDIF The LDAP Data Interchange Format ( LDIF ) is a plain text representation of an LDAP entry. It takes the following form: The optional id is a number determined by the application that is used to edit the entry. Each entry can contain as many attribute_type and attribute_value pairs as needed, as long as they are all defined in a corresponding schema file. A blank line indicates the end of an entry. 20.1.1.2. OpenLDAP Features OpenLDAP suite provides a number of important features: LDAPv3 Support - Many of the changes in the protocol since LDAP version 2 are designed to make LDAP more secure. Among other improvements, this includes the support for Simple Authentication and Security Layer ( SASL ), and Transport Layer Security ( TLS ) protocols. LDAP Over IPC - The use of inter-process communication ( IPC ) enhances security by eliminating the need to communicate over a network. IPv6 Support - OpenLDAP is compliant with Internet Protocol version 6 ( IPv6 ), the generation of the Internet Protocol. LDIFv1 Support - OpenLDAP is fully compliant with LDIF version 1. Updated C API - The current C API improves the way programmers can connect to and use LDAP directory servers. Enhanced Standalone LDAP Server - This includes an updated access control system, thread pooling, better tools, and much more. 20.1.1.3. OpenLDAP Server Setup The typical steps to set up an LDAP server on Red Hat Enterprise Linux are as follows: Install the OpenLDAP suite. See Section 20.1.2, "Installing the OpenLDAP Suite" for more information on required packages. Customize the configuration as described in Section 20.1.3, "Configuring an OpenLDAP Server" . Start the slapd service as described in Section 20.1.4, "Running an OpenLDAP Server" . Use the ldapadd utility to add entries to the LDAP directory. Use the ldapsearch utility to verify that the slapd service is accessing the information correctly. 20.1.2. Installing the OpenLDAP Suite The suite of OpenLDAP libraries and tools is provided by the following packages: Table 20.1. List of OpenLDAP packages Package Description openldap A package containing the libraries necessary to run the OpenLDAP server and client applications. openldap-clients A package containing the command-line utilities for viewing and modifying directories on an LDAP server. openldap-servers A package containing both the services and utilities to configure and run an LDAP server. This includes the Standalone LDAP Daemon , slapd . compat-openldap A package containing the OpenLDAP compatibility libraries. Additionally, the following packages are commonly used along with the LDAP server: Table 20.2. List of commonly installed additional LDAP packages Package Description sssd A package containing the System Security Services Daemon (SSSD) , a set of daemons to manage access to remote directories and authentication mechanisms. It provides the Name Service Switch (NSS) and the Pluggable Authentication Modules (PAM) interfaces toward the system and a pluggable back-end system to connect to multiple different account sources. mod_authz_ldap A package containing mod_authz_ldap , the LDAP authorization module for the Apache HTTP Server. This module uses the short form of the distinguished name for a subject and the issuer of the client SSL certificate to determine the distinguished name of the user within an LDAP directory. It is also capable of authorizing users based on attributes of that user's LDAP directory entry, determining access to assets based on the user and group privileges of the asset, and denying access for users with expired passwords. Note that the mod_ssl module is required when using the mod_authz_ldap module. To install these packages, use the yum command in the following form: For example, to perform the basic LDAP server installation, type the following at a shell prompt: Note that you must have superuser privileges (that is, you must be logged in as root ) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, see Section 8.2.4, "Installing Packages" . 20.1.2.1. Overview of OpenLDAP Server Utilities To perform administrative tasks, the openldap-servers package installs the following utilities along with the slapd service: Table 20.3. List of OpenLDAP server utilities Command Description slapacl Allows you to check the access to a list of attributes. slapadd Allows you to add entries from an LDIF file to an LDAP directory. slapauth Allows you to check a list of IDs for authentication and authorization permissions. slapcat Allows you to pull entries from an LDAP directory in the default format and save them in an LDIF file. slapdn Allows you to check a list of Distinguished Names (DNs) based on available schema syntax. slapindex Allows you to re-index the slapd directory based on the current content. Run this utility whenever you change indexing options in the configuration file. slappasswd Allows you to create an encrypted user password to be used with the ldapmodify utility, or in the slapd configuration file. slapschema Allows you to check the compliance of a database with the corresponding schema. slaptest Allows you to check the LDAP server configuration. For a detailed description of these utilities and their usage, see the corresponding manual pages as referred to in Section 20.1.6.1, "Installed Documentation" . Important Although only root can run slapadd , the slapd service runs as the ldap user. Because of this, the directory server is unable to modify any files created by slapadd . To correct this issue, after running the slapd utility, type the following at a shell prompt: Warning To preserve the data integrity, stop the slapd service before using slapadd , slapcat , or slapindex . You can do so by typing the following at a shell prompt: For more information on how to start, stop, restart, and check the current status of the slapd service, see Section 20.1.4, "Running an OpenLDAP Server" . 20.1.2.2. Overview of OpenLDAP Client Utilities The openldap-clients package installs the following utilities which can be used to add, modify, and delete entries in an LDAP directory: Table 20.4. List of OpenLDAP client utilities Command Description ldapadd Allows you to add entries to an LDAP directory, either from a file, or from standard input. It is a symbolic link to ldapmodify -a . ldapcompare Allows you to compare given attribute with an LDAP directory entry. ldapdelete Allows you to delete entries from an LDAP directory. ldapexop Allows you to perform extended LDAP operations. ldapmodify Allows you to modify entries in an LDAP directory, either from a file, or from standard input. ldapmodrdn Allows you to modify the RDN value of an LDAP directory entry. ldappasswd Allows you to set or change the password for an LDAP user. ldapsearch Allows you to search LDAP directory entries. ldapurl Allows you to compose or decompose LDAP URLs. ldapwhoami Allows you to perform a whoami operation on an LDAP server. With the exception of ldapsearch , each of these utilities is more easily used by referencing a file containing the changes to be made rather than typing a command for each entry to be changed within an LDAP directory. The format of such a file is outlined in the man page for each utility. 20.1.2.3. Overview of Common LDAP Client Applications Although there are various graphical LDAP clients capable of creating and modifying directories on the server, none of them is included in Red Hat Enterprise Linux. Popular applications that can access directories in a read-only mode include Mozilla Thunderbird , Evolution , or Ekiga . 20.1.3. Configuring an OpenLDAP Server By default, OpenLDAP stores its configuration in the /etc/openldap/ directory. Table 20.5, "List of OpenLDAP configuration files and directories" highlights the most important files and directories within this directory. Table 20.5. List of OpenLDAP configuration files and directories Path Description /etc/openldap/ldap.conf The configuration file for client applications that use the OpenLDAP libraries. This includes ldapadd , ldapsearch , Evolution , etc. /etc/openldap/slapd.d/ The directory containing the slapd configuration. In Red Hat Enterprise Linux 6, the slapd service uses a configuration database located in the /etc/openldap/slapd.d/ directory and only reads the old /etc/openldap/slapd.conf configuration file if this directory does not exist. If you have an existing slapd.conf file from a installation, you can either wait for the openldap-servers package to convert it to the new format the time you update this package, or type the following at a shell prompt as root to convert it immediately: The slapd configuration consists of LDIF entries organized in a hierarchical directory structure, and the recommended way to edit these entries is to use the server utilities described in Section 20.1.2.1, "Overview of OpenLDAP Server Utilities" . Important An error in an LDIF file can render the slapd service unable to start. Because of this, it is strongly advised that you avoid editing the LDIF files within the /etc/openldap/slapd.d/ directory directly. 20.1.3.1. Changing the Global Configuration Global configuration options for the LDAP server are stored in the /etc/openldap/slapd.d/cn=config.ldif file. The following directives are commonly used: olcAllows The olcAllows directive allows you to specify which features to enable. It takes the following form: It accepts a space-separated list of features as described in Table 20.6, "Available olcAllows options" . The default option is bind_v2 . Table 20.6. Available olcAllows options Option Description bind_v2 Enables the acceptance of LDAP version 2 bind requests. bind_anon_cred Enables an anonymous bind when the Distinguished Name (DN) is empty. bind_anon_dn Enables an anonymous bind when the Distinguished Name (DN) is not empty. update_anon Enables processing of anonymous update operations. proxy_authz_anon Enables processing of anonymous proxy authorization control. Example 20.1. Using the olcAllows directive olcConnMaxPending The olcConnMaxPending directive allows you to specify the maximum number of pending requests for an anonymous session. It takes the following form: The default option is 100 . Example 20.2. Using the olcConnMaxPending directive olcConnMaxPendingAuth The olcConnMaxPendingAuth directive allows you to specify the maximum number of pending requests for an authenticated session. It takes the following form: The default option is 1000 . Example 20.3. Using the olcConnMaxPendingAuth directive olcDisallows The olcDisallows directive allows you to specify which features to disable. It takes the following form: It accepts a space-separated list of features as described in Table 20.7, "Available olcDisallows options" . No features are disabled by default. Table 20.7. Available olcDisallows options Option Description bind_anon Disables the acceptance of anonymous bind requests. bind_simple Disables the simple bind authentication mechanism. tls_2_anon Disables the enforcing of an anonymous session when the STARTTLS command is received. tls_authc Disallows the STARTTLS command when authenticated. Example 20.4. Using the olcDisallows directive olcIdleTimeout The olcIdleTimeout directive allows you to specify how many seconds to wait before closing an idle connection. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 20.5. Using the olcIdleTimeout directive olcLogFile The olcLogFile directive allows you to specify a file in which to write log messages. It takes the following form: The log messages are written to standard error by default. Example 20.6. Using the olcLogFile directive olcReferral The olcReferral option allows you to specify a URL of a server to process the request in case the server is not able to handle it. It takes the following form: This option is disabled by default. Example 20.7. Using the olcReferral directive olcWriteTimeout The olcWriteTimeout option allows you to specify how many seconds to wait before closing a connection with an outstanding write request. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 20.8. Using the olcWriteTimeout directive 20.1.3.2. Changing the Database-Specific Configuration By default, the OpenLDAP server uses Berkeley DB (BDB) as a database back end. The configuration for this database is stored in the /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif file. The following directives are commonly used in a database-specific configuration: olcReadOnly The olcReadOnly directive allows you to use the database in a read-only mode. It takes the following form: It accepts either TRUE (enable the read-only mode), or FALSE (enable modifications of the database). The default option is FALSE . Example 20.9. Using the olcReadOnly directive olcRootDN The olcRootDN directive allows you to specify the user that is unrestricted by access controls or administrative limit parameters set for operations on the LDAP directory. It takes the following form: It accepts a Distinguished Name ( DN ). The default option is cn=Manager,dc=my-domain,dc=com . Example 20.10. Using the olcRootDN directive olcRootPW The olcRootPW directive allows you to set a password for the user that is specified using the olcRootDN directive. It takes the following form: It accepts either a plain text string, or a hash. To generate a hash, type the following at a shell prompt: Example 20.11. Using the olcRootPW directive olcSuffix The olcSuffix directive allows you to specify the domain for which to provide information. It takes the following form: It accepts a fully qualified domain name ( FQDN ). The default option is dc=my-domain,dc=com . Example 20.12. Using the olcSuffix directive 20.1.3.3. Extending Schema Since OpenLDAP 2.3, the /etc/openldap/slapd.d/cn=config/cn=schema/ directory also contains LDAP definitions that were previously located in /etc/openldap/schema/ . It is possible to extend the schema used by OpenLDAP to support additional attribute types and object classes using the default schema files as a guide. However, this task is beyond the scope of this chapter. For more information on this topic, see http://www.openldap.org/doc/admin/schema.html . 20.1.4. Running an OpenLDAP Server This section describes how to start, stop, restart, and check the current status of the Standalone LDAP Daemon . For more information on how to manage system services in general, see Chapter 12, Services and Daemons . 20.1.4.1. Starting the Service To run the slapd service, type the following at a shell prompt: If you want the service to start automatically at the boot time, use the following command: Note that you can also use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 20.1.4.2. Stopping the Service To stop the running slapd service, type the following at a shell prompt: To prevent the service from starting automatically at the boot time, type: Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 20.1.4.3. Restarting the Service To restart the running slapd service, type the following at a shell prompt: This stops the service, and then starts it again. Use this command to reload the configuration. 20.1.4.4. Checking the Service Status To check whether the service is running, type the following at a shell prompt: 20.1.5. Configuring a System to Authenticate Using OpenLDAP In order to configure a system to authenticate using OpenLDAP, make sure that the appropriate packages are installed on both LDAP server and client machines. For information on how to set up the server, follow the instructions in Section 20.1.2, "Installing the OpenLDAP Suite" and Section 20.1.3, "Configuring an OpenLDAP Server" . On a client, type the following at a shell prompt: Chapter 13, Configuring Authentication provides detailed instructions on how to configure applications to use LDAP for authentication. 20.1.5.1. Migrating Old Authentication Information to LDAP Format The migrationtools package provides a set of shell and Perl scripts to help you migrate authentication information into an LDAP format. To install this package, type the following at a shell prompt: This will install the scripts to the /usr/share/migrationtools/ directory. Once installed, edit the /usr/share/migrationtools/migrate_common.ph file and change the following lines to reflect the correct domain, for example: Alternatively, you can specify the environment variables directly on the command line. For example, to run the migrate_all_online.sh script with the default base set to dc=example,dc=com , type: To decide which script to run in order to migrate the user database, see Table 20.8, "Commonly used LDAP migration scripts" . Table 20.8. Commonly used LDAP migration scripts Existing Name Service Is LDAP Running? Script to Use /etc flat files yes migrate_all_online.sh /etc flat files no migrate_all_offline.sh NetInfo yes migrate_all_netinfo_online.sh NetInfo no migrate_all_netinfo_offline.sh NIS (YP) yes migrate_all_nis_online.sh NIS (YP) no migrate_all_nis_offline.sh For more information on how to use these scripts, see the README and the migration-tools.txt files in the /usr/share/doc/migrationtools- version / directory. 20.1.6. Additional Resources The following resources offer additional information on the Lightweight Directory Access Protocol. Before configuring LDAP on your system, it is highly recommended that you review these resources, especially the OpenLDAP Software Administrator's Guide . 20.1.6.1. Installed Documentation The following documentation is installed with the openldap-servers package: /usr/share/doc/openldap-servers- version /guide.html A copy of the OpenLDAP Software Administrator's Guide . /usr/share/doc/openldap-servers- version /README.schema A README file containing the description of installed schema files. Additionally, there is also a number of manual pages that are installed with the openldap , openldap-servers , and openldap-clients packages: Client Applications man ldapadd - Describes how to add entries to an LDAP directory. man ldapdelete - Describes how to delete entries within an LDAP directory. man ldapmodify - Describes how to modify entries within an LDAP directory. man ldapsearch - Describes how to search for entries within an LDAP directory. man ldappasswd - Describes how to set or change the password of an LDAP user. man ldapcompare - Describes how to use the ldapcompare tool. man ldapwhoami - Describes how to use the ldapwhoami tool. man ldapmodrdn - Describes how to modify the RDNs of entries. Server Applications man slapd - Describes command-line options for the LDAP server. Administrative Applications man slapadd - Describes command-line options used to add entries to a slapd database. man slapcat - Describes command-line options used to generate an LDIF file from a slapd database. man slapindex - Describes command-line options used to regenerate an index based upon the contents of a slapd database. man slappasswd - Describes command-line options used to generate user passwords for LDAP directories. Configuration Files man ldap.conf - Describes the format and options available within the configuration file for LDAP clients. man slapd-config - Describes the format and options available within the configuration directory. 20.1.6.2. Useful Websites http://www.openldap.org/doc/admin24/ The current version of the OpenLDAP Software Administrator's Guide . 20.1.6.3. Related Books OpenLDAP by Example by John Terpstra and Benjamin Coles; Prentice Hall. A collection of practical exercises in the OpenLDAP deployment. Implementing LDAP by Mark Wilcox; Wrox Press, Inc. A book covering LDAP from both the system administrator's and software developer's perspective. Understanding and Deploying LDAP Directory Services by Tim Howes et al.; Macmillan Technical Publishing. A book covering LDAP design principles, as well as its deployment in a production environment.
[ "[ id ] dn: distinguished_name attribute_type : attribute_value attribute_type : attribute_value", "install package", "~]# yum install openldap openldap-clients openldap-servers", "~]# chown -R ldap:ldap /var/lib/ldap", "~]# service slapd stop Stopping slapd: [ OK ]", "~]# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/", "olcAllows : feature", "olcAllows: bind_v2 update_anon", "olcConnMaxPending : number", "olcConnMaxPending: 100", "olcConnMaxPendingAuth : number", "olcConnMaxPendingAuth: 1000", "olcDisallows : feature", "olcDisallows: bind_anon", "olcIdleTimeout : number", "olcIdleTimeout: 180", "olcLogFile : file_name", "olcLogFile: /var/log/slapd.log", "olcReferral : URL", "olcReferral: ldap://root.openldap.org", "olcWriteTimeout", "olcWriteTimeout: 180", "olcReadOnly : boolean", "olcReadOnly: TRUE", "olcRootDN : distinguished_name", "olcRootDN: cn=root,dc=example,dc=com", "olcRootPW : password", "~]USD slappaswd New password: Re-enter new password: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD", "olcRootPW: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD", "olcSuffix : domain_name", "olcSuffix: dc=example,dc=com", "~]# service slapd start Starting slapd: [ OK ]", "~]# chkconfig slapd on", "~]# service slapd stop Stopping slapd: [ OK ]", "~]# chkconfig slapd off", "~]# service slapd restart Stopping slapd: [ OK ] Starting slapd: [ OK ]", "~]# service slapd status slapd (pid 3672) is running", "~]# yum install openldap openldap-clients sssd", "~]# yum install migrationtools", "Default DNS domain USDDEFAULT_MAIL_DOMAIN = \"example.com\"; Default base USDDEFAULT_BASE = \"dc=example,dc=com\";", "~]# export DEFAULT_BASE=\"dc=example,dc=com\" /usr/share/migrationtools/migrate_all_online.sh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-directory_servers
5.4.2.3. Moving the Data
5.4.2.3. Moving the Data Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1 .
[ "pvmove /dev/sdb1 /dev/sdd1 /dev/sdb1: Moved: 10.0% /dev/sdb1: Moved: 79.7% /dev/sdb1: Moved: 100.0% pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 17.15G 0 /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdd1 myvg lvm2 a- 17.15G 15.15G 2.00G" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/move_data_ex4
10.5.54. LanguagePriority
10.5.54. LanguagePriority LanguagePriority sets precedence for different languages in case the client Web browser has no language preference set.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-languagepriority
Chapter 6. Installing and configuring the Topology plugin
Chapter 6. Installing and configuring the Topology plugin 6.1. Installation The Topology plugin enables you to visualize the workloads such as Deployment, Job, Daemonset, Statefulset, CronJob, Pods and Virtual Machines powering any service on your Kubernetes cluster. Prerequisites You have installed and configured the @backstage/plugin-kubernetes-backend dynamic plugins. You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount. The ClusterRole must be granted to ServiceAccount accessing the cluster. Note If you have the Developer Hub Kubernetes plugin configured, then the ClusterRole is already granted. Procedure The Topology plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows: app-config.yaml fragment auth: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-topology disabled: false 6.2. Configuration 6.2.1. Viewing OpenShift routes To view OpenShift routes, you must grant read access to the routes resource in the Cluster Role: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: ... - apiGroups: - route.openshift.io resources: - routes verbs: - get - list You must also add the following in kubernetes.customResources property in your app-config.yaml file: kubernetes: ... customResources: - group: 'route.openshift.io' apiVersion: 'v1' plural: 'routes' 6.2.2. Viewing pod logs To view pod logs, you must grant the following permission to the ClusterRole : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: ... - apiGroups: - '' resources: - pods - pods/log verbs: - get - list - watch 6.2.3. Viewing Tekton PipelineRuns To view the Tekton PipelineRuns you must grant read access to the pipelines , pipelinesruns , and taskruns resources in the ClusterRole : ... apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: ... - apiGroups: - tekton.dev resources: - pipelines - pipelineruns - taskruns verbs: - get - list To view the Tekton PipelineRuns list in the side panel and the latest PipelineRuns status in the Topology node decorator, you must add the following code to the kubernetes.customResources property in your app-config.yaml file: kubernetes: ... customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelines' - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' plural: 'taskruns' 6.2.4. Viewing virtual machines To view virtual machines, the OpenShift Virtualization operator must be installed and configured on a Kubernetes cluster. You must also grant read access to the VirtualMachines resource in the ClusterRole : ... apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: ... - apiGroups: - kubevirt.io resources: - virtualmachines - virtualmachineinstances verbs: - get - list To view the virtual machine nodes on the topology plugin, you must add the following code to the kubernetes.customResources property in the app-config.yaml file: kubernetes: ... customResources: - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachines' - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachineinstances' 6.2.5. Enabling the source code editor To enable the source code editor, you must grant read access to the CheClusters resource in the ClusterRole as shown in the following example code: ... apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: ... - apiGroups: - org.eclipse.che resources: - checlusters verbs: - get - list To use the source code editor, you must add the following configuration to the kubernetes.customResources property in your app-config.yaml file: kubernetes: ... customResources: - group: 'org.eclipse.che' apiVersion: 'v2' plural: 'checlusters' 6.2.6. Labels and annotations 6.2.6.1. Linking to the source code editor or the source Add the following annotations to workload resources, such as Deployments to navigate to the Git repository of the associated application using the source code editor: annotations: app.openshift.io/vcs-uri: <GIT_REPO_URL> Add the following annotation to navigate to a specific branch: annotations: app.openshift.io/vcs-ref: <GIT_REPO_BRANCH> Note If Red Hat OpenShift Dev Spaces is installed and configured and git URL annotations are also added to the workload YAML file, then clicking on the edit code decorator redirects you to the Red Hat OpenShift Dev Spaces instance. Note When you deploy your application using the OCP Git import flows, then you do not need to add the labels as import flows do that. Otherwise, you need to add the labels manually to the workload YAML file. You can also add the app.openshift.io/edit-url annotation with the edit URL that you want to access using the decorator. 6.2.6.2. Entity annotation/label For RHDH to detect that an entity has Kubernetes components, add the following annotation to the entity's catalog-info.yaml : annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> The following label is added to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity, add the following label to the resources: labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>` Note When using the label selector, the mentioned labels must be present on the resource. 6.2.6.3. Namespace annotation To identify the Kubernetes resources using the defined namespace, add the backstage.io/kubernetes-namespace annotation: annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS> The Red Hat OpenShift Dev Spaces instance is not accessible using the source code editor if the backstage.io/kubernetes-namespace annotation is added to the catalog-info.yaml file. To retrieve the instance URL, you require the CheCluster Custom Resource (CR). As the CheCluster CR is created in the openshift-devspaces namespace, the instance URL is not retrieved if the namespace annotation value is not openshift-devspaces. 6.2.6.4. Label selector query annotation You can write your own custom label, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations: annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end' If you have multiple entities while Red Hat Dev Spaces is configured and want multiple entities to support the edit code decorator that redirects to the Red Hat Dev Spaces instance, you can add the backstage.io/kubernetes-label-selector annotation to the catalog-info.yaml file for each entity. annotations: backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)' If you are using the label selector, you must add the following labels to your resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity: labels: component: che # add this label to your che cluster instance labels: component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity You can also write your own custom query for the label selector with unique labels to differentiate your entities. However, you need to ensure that you add those labels to the resources associated with your entities including your CheCluster instance. 6.2.6.5. Icon displayed in the node To display a runtime icon in the topology nodes, add the following label to workload resources, such as Deployments: labels: app.openshift.io/runtime: <RUNTIME_NAME> Alternatively, you can include the following label to display the runtime icon: labels: app.kubernetes.io/name: <RUNTIME_NAME> Supported values of <RUNTIME_NAME> include: django dotnet drupal go-gopher golang grails jboss jruby js nginx nodejs openjdk perl phalcon php python quarkus rails redis rh-spring-boot rust java rh-openjdk ruby spring spring-boot Note Other values result in icons not being rendered for the node. 6.2.6.6. App grouping To display workload resources such as deployments or pods in a visual group, add the following label: labels: app.kubernetes.io/part-of: <GROUP_NAME> 6.2.6.7. Node connector To display the workload resources such as deployments or pods with a visual connector, add the following annotation: annotations: app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]' For more information about the labels and annotations, see Guidelines for labels and annotations for OpenShift applications .
[ "auth: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-topology disabled: false", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - route.openshift.io resources: - routes verbs: - get - list", "kubernetes: customResources: - group: 'route.openshift.io' apiVersion: 'v1' plural: 'routes'", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - '' resources: - pods - pods/log verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - tekton.dev resources: - pipelines - pipelineruns - taskruns verbs: - get - list", "kubernetes: customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelines' - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' plural: 'taskruns'", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - kubevirt.io resources: - virtualmachines - virtualmachineinstances verbs: - get - list", "kubernetes: customResources: - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachines' - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachineinstances'", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - org.eclipse.che resources: - checlusters verbs: - get - list", "kubernetes: customResources: - group: 'org.eclipse.che' apiVersion: 'v2' plural: 'checlusters'", "annotations: app.openshift.io/vcs-uri: <GIT_REPO_URL>", "annotations: app.openshift.io/vcs-ref: <GIT_REPO_BRANCH>", "annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>", "labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>`", "annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS>", "annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'", "annotations: backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'", "labels: component: che # add this label to your che cluster instance labels: component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity", "labels: app.openshift.io/runtime: <RUNTIME_NAME>", "labels: app.kubernetes.io/name: <RUNTIME_NAME>", "labels: app.kubernetes.io/part-of: <GROUP_NAME>", "annotations: app.openshift.io/connects-to: '[{\"apiVersion\": <RESOURCE_APIVERSION>,\"kind\": <RESOURCE_KIND>,\"name\": <RESOURCE_NAME>}]'" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/configuring_dynamic_plugins/installing-and-configuring-the-topology-plugin
Chapter 1. Building applications overview
Chapter 1. Building applications overview Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform. After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift Container Platform CLI . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift Container Platform CLI. With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all applications resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Connecting an application to services An application uses backing services to build and connect workloads, which vary according to the service provider. Using the Service Binding Operator , as a developer, you can bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. You can apply service binding also on IBM Power Systems, IBM Z, and LinuxONE environments . 1.2.4. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/building-applications-overview
Chapter 1. Users and organizations in Red Hat Quay
Chapter 1. Users and organizations in Red Hat Quay Before you begin creating repositories to hold your container images in Red Hat Quay, you should consider how you want to organize those repositories. Every repository in a Red Hat Quay instance must be associated with either an Organization or a User. 1.1. Red Hat Quay tenancy model Organizations provide a way of sharing repositories under a common namespace which does not belong to a single user, but rather to many users in a shared setting (such as a company). Teams provide a way for an organization to delegate permissions (both global and on specific repositories) to sets or groups of users. Users can log in to a registry through the Red Hat Quay web UI or a client (such as podman login ). Each user automatically gets a user namespace, for example, quay-server.example.com/user/<username> . Super users have enhanced access and privileges via the Super User Admin Panel in the user interface and through Super User API calls that are not visible or accessible to normal users. Robot accounts provide automated access to repositories for non-human users such as pipeline tools and are similar in nature to OpenShift service accounts. Permissions can be granted to a robot account in a repository by adding that account like any other user or team. 1.2. Creating user accounts To create a new user for your Red Hat Quay instance: Log in to Red Hat Quay as the superuser (quay by default). Select your account name from the upper right corner of the home page and choose Super User Admin Panel. Select the Users icon from the left column. Select the Create User button. Enter the new user's Username and Email address, then select the Create User button. Back on the Users page, select the Options icon to the right of the new Username. A drop-down menu appears, as shown in the following figure: Choose Change Password from the menu. Add the new password and verify it, then select the Change User Password button. The new user can now use that username and password to log in via the web ui or through some container client. 1.3. Deleting a Red Hat Quay user from the command line When accessing the Users tab in the Superuser Admin panel of the Red Hat Quay UI, you might encounter a situation where no users are listed. Instead, a message appears, indicating that Red Hat Quay is configured to use external authentication, and users can only be created in that system. This error occurs for one of two reasons: The web UI times out when loading users. When this happens, users are not accessible to perform any operations on. On LDAP authentication. When a userID is changed but the associated email is not. Currently, Red Hat Quay does not allow the creation of a new user with an old email address. Use the following procedure to delete a user from Red Hat Quay when facing this issue. Procedure Enter the following curl command to delete a user from the command line: USD curl -X DELETE -H "Authorization: Bearer <insert token here>" https://<quay_hostname>/api/v1/superuser/users/<name_of_user> Note After deleting the user, any repositories that this user had in his private account become unavailable. 1.4. Creating organization accounts Any user can create their own organization to share repositories of container images. To create a new organization: While logged in as any user, select the plus sign (+) from the upper right corner of the home page and choose New Organization. Type the name of the organization. The name must be alphanumeric, all lower case, and between 2 and 255 characters long Select Create Organization. The new organization appears, ready for you to begin adding repositories, teams, robot accounts and other features from icons on the left column. The following figure shows an example of the new organization's page with the settings tab selected.
[ "curl -X DELETE -H \"Authorization: Bearer <insert token here>\" https://<quay_hostname>/api/v1/superuser/users/<name_of_user>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/user-org-intro
9.7. NFS Server Configuration
9.7. NFS Server Configuration There are two ways to configure an NFS server: Manually editing the NFS configuration file, that is, /etc/exports , and through the command line, that is, by using the command exportfs 9.7.1. The /etc/exports Configuration File The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules: Blank lines are ignored. To add a comment, start a line with the hash mark ( # ). You can wrap long lines with a backslash ( \ ). Each exported file system should be on its own individual line. Any lists of authorized hosts placed after an exported file system must be separated by space characters. Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis. Each entry for an exported file system has the following structure: The aforementioned structure uses the following variables: export The directory being exported host The host or network to which the export is being shared options The options to be used for host It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in: For information on different methods for specifying hostnames, refer to Section 9.7.4, "Hostname Formats" . In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example: Example 9.6. The /etc/exports file Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use default settings. The default settings are: ro The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read/write), specify the rw option. sync The NFS server will not reply to requests before changes made by requests are written to disk. To enable asynchronous writes instead, specify the option async . wdelay The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accesses by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay . no_wdelay is only available if the default sync option is also specified. root_squash This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server will assign them the user ID nfsnobody . This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify no_root_squash . To squash every remote user (including root), use all_squash . To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in: Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid options allow you to create a special user and group account for remote NFS users to share. By default, access control lists ( ACLs ) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system. Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options: /another/exported/directory 192.168.0.3(rw,async) In this example 192.168.0.3 can mount /another/exported/directory/ read/write and all writes to disk are asynchronous. For more information on exporting options, refer to man exportfs . Other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to man exports for details on these less-used options. Important The format of the /etc/exports file is very precise, particularly in regards to use of the space character. Remember to always separate exported file systems from hosts and hosts from one another with a space character. However, there should be no other space characters in the file except on comment lines. For example, the following two lines do not mean the same thing: The first line allows only users from bob.example.com read/write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.
[ "export host ( options )", "export host1 ( options1 ) host2 ( options2 ) host3 ( options3 )", "/exported/directory bob.example.com", "export host (anonuid= uid ,anongid= gid )", "/home bob.example.com(rw) /home bob.example.com (rw)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/nfs-serverconfig
Chapter 17. Red Hat Decision Manager roles and users
Chapter 17. Red Hat Decision Manager roles and users To access Business Central or KIE Server, you must create users and assign them appropriate roles before the servers are started. You can create users and roles when you install Business Central or KIE Server. If both Business Central and KIE Server are running on a single instance, a user who is authenticated for Business Central can also access KIE Server. However, if Business Central and KIE Server are running on different instances, a user who is authenticated for Business Central must be authenticated separately to access KIE Server. For example, if a user who is authenticated on Business Central but not authenticated on KIE Server tries to view or manage process definitions in Business Central, a 401 error is logged in the log file and the Invalid credentials to load data from remote server. Contact your system administrator. message appears in Business Central. This section describes Red Hat Decision Manager user roles. Note The admin , analyst , and rest-all roles are reserved for Business Central. The kie-server role is reserved for KIE Server. For this reason, the available roles can differ depending on whether Business Central, KIE Server, or both are installed. admin : Users with the admin role are the Business Central administrators. They can manage users and create, clone, and manage repositories. They have full access to make required changes in the application. Users with the admin role have access to all areas within Red Hat Decision Manager. analyst : Users with the analyst role have access to all high-level features. They can model projects. However, these users cannot add contributors to spaces or delete spaces in the Design Projects view. Access to the Deploy Execution Servers view, which is intended for administrators, is not available to users with the analyst role. However, the Deploy button is available to these users when they access the Library perspective. rest-all : Users with the rest-all role can access Business Central REST capabilities. kie-server : Users with the kie-server role can access KIE Server REST capabilities. 17.1. Adding Red Hat Decision Manager users Before you can use RH-SSO to authenticate Business Central or KIE Server, you must add users to the realm that you created. To add new users and assign them a role to access Red Hat Decision Manager, complete the following steps: Log in to the RH-SSO Admin Console and open the realm that you want to add a user to. Click the Users menu item under the Manage section. An empty user list appears on the Users page. Click the Add User button on the empty user list to start creating your new user. The Add User page opens. On the Add User page, enter the user information and click Save . Click the Credentials tab and create a password. Assign the new user one of the roles that allows access to Red Hat Decision Manager. For example, assign the admin role to access Business Central or assign the kie-server role to access KIE Server. Note For projects that deploy from Business Central on OpenShift, create an RH-SSO user called mavenuser without any role assigned, then add this user to the BUSINESS_CENTRAL_MAVEN_USERNAME and BUSINESS_CENTRAL_MAVEN_PASSWORD in your OpenShift template. Define the roles as realm roles in the Realm Roles tab under the Roles section. Alternatively, for roles used in Business Central, you can define the roles as client roles for the kie client. For instructions about configuring the kie client, see Section 18.1, "Creating the Business Central client for RH-SSO" . To use client roles, you must also configure additional settings for Business Central, as described in Section 18.2, "Installing the RH-SSO client adapter for Business Central" . You must define roles used in KIE Server as realm roles. Click the Role Mappings tab on the Users page to assign roles.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/roles-users-con_integrate-sso
Chapter 1. About Telemetry
Chapter 1. About Telemetry Red Hat Advanced Cluster Security for Kubernetes (RHACS) collects anonymized aggregated information about product usage and product configuration. It helps Red Hat understand how everyone uses the product and identify areas to prioritize for improvements. In addition, Red Hat uses this information to improve the user experience. 1.1. Information collected by Telemetry Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. Note Telemetry data collection is enabled by default, except for the installations with the offline mode enabled. Telemetry collects the following information: API, roxctl CLI, and user interface (UI) features and settings to know how you use Red Hat Advanced Cluster Security for Kubernetes (RHACS), which helps prioritize efforts. The time you spend on UI screens to help us improve user experience. The integrations are used to know if there are integrations that you have never used. The number of connected secured clusters and their configurations. Errors you encounter to identify the most common problems.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/telemetry/about-telemetry
Installing
Installing Red Hat Enterprise Linux AI 1.1 Installation documentation on various platforms Red Hat RHEL AI Documentation Team
[ "use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000", "aws s3 mb s3://USDBUCKET", "printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json", "curl -Lo disk.raw <link-to-raw-file>", "aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI", "printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json", "task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active", "snapshot_id=USD(aws ec2 describe-snapshots | jq -r '.Snapshots[] | select(.Description | contains(\"'USD{task_id}'\")) | .SnapshotId')", "aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"", "ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)", "aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"", "aws ec2 describe-images --owners self", "aws ec2 describe-security-groups", "aws ec2 describe-subnets", "instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>", "aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "ibmcloud login", "ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'", "ibmcloud plugin install cloud-object-storage infrastructure-service", "ibmcloud target -g Default", "ibmcloud target -r us-east", "ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'", "cos_deploy_plan=premium-global-deployment", "cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE", "ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}", "cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')", "ibmcloud cos config crn --crn USD{cos_crn} --force", "bucket_name=NAME_OF_MY_BUCKET", "ibmcloud cos bucket-create --bucket USD{bucket_name}", "cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')", "ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}", "curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"", "image_name=rhel-ai-20240703v0", "ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>", "ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol", "image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')", "while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done", "ibmcloud is image USD{image_id}", "ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>", "ibmcloud plugin install infrastructure-service", "ssh-keygen -f ibmcloud -t ed25519", "ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519", "ibmcloud is floating-ip-reserve my-public-ip --zone <region>", "ibmcloud is instance-profiles", "name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250", "ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' --allow-ip-spoofing false", "ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train", "name=my-rhelai-instance", "data_volume_size=1000", "ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}", "lsblk", "disk=/dev/vdb", "sgdisk -n 1:0:0 USDdisk", "mkfs.xfs -L ilab-data USD{disk}1", "echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab", "systemctl daemon-reload", "mount -a", "chmod 1777 /mnt/", "echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile", "source USDHOME/.bash_profile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html-single/installing/index
function::gettimeofday_ms
function::gettimeofday_ms Name function::gettimeofday_ms - Number of milliseconds since UNIX epoch. Synopsis Arguments None General Syntax gettimeofday_ms: long Description This function returns the number of milliseconds since the UNIX epoch.
[ "function gettimeofday_ms:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-gettimeofday-ms
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/rhdg-downloads_datagrid
Chapter 75. Replacing the web server and LDAP server certificates if they have expired in the whole IdM deployment
Chapter 75. Replacing the web server and LDAP server certificates if they have expired in the whole IdM deployment Identity Management (IdM) uses the following service certificates: The LDAP (or Directory ) server certificate The web (or httpd ) server certificate The PKINIT certificate In an IdM deployment without a CA, certmonger does not by default track IdM service certificates or notify of their expiration. If the IdM system administrator does not manually set up notifications for these certificates, or configure certmonger to track them, the certificates will expire without notice. Follow this procedure to manually replace expired certificates for the httpd and LDAP services running on the server.idm.example.com IdM server. Note The HTTP and LDAP service certificates have different keypairs and subject names on different IdM servers. Therefore, you must renew the certificates on each IdM server individually. Prerequisites The HTTP and LDAP certificates have expired on all IdM replicas in the topology. If not, see Replacing the web server and LDAP server certificates if they have not yet expired on an IdM replica . You have root access to the IdM server and replicas. You know the Directory Manager password. You have created backups of the following directories and files: /etc/dirsrv/slapd- IDM-EXAMPLE-COM / /etc/httpd/alias /var/lib/certmonger /var/lib/ipa/certs/ Procedure Optional: Perform a backup of /var/lib/ipa/private and /var/lib/ipa/passwds . If you are not using the same CA to sign the new certificates or if the already installed CA certificate is no longer valid, update the information about the external CA in your local database with a file that contains a valid CA certificate chain of the external CA. The file is accepted in PEM and DER certificate, PKCS#7 certificate chain, PKCS#8 and raw private key and PKCS#12 formats. Install the certificates available in ca_certificate_chain_file.crt as additional CA certificates into IdM: Update the local IdM certificate databases with certificates from ca_certicate_chain_file.crt : Request the certificates for httpd and LDAP: Create a certificate signing request (CSR) for the Apache web server running on your IdM instances to your third party CA using the OpenSSL utility. The creation of a new private key is optional. If you still have the original private key, you can use the -in option with the openssl req command to specify the input file name to read the request from: If you want to create a new key: Create a certificate signing request (CSR) for the LDAP server running on your IdM instances to your third party CA using the OpenSSL utility: Submit the CSRs, /tmp/http.csr and tmp/ldap.csr , to the external CA, and obtain a certificate for httpd and a certificate for LDAP. The process differs depending on the service to be used as the external CA. Install the certificate for httpd : Install the LDAP certificate into an NSS database: Optional: List the available certificates: The default certificate nickname is Server-Cert , but it is possible that a different name was applied. Remove the old invalid certificate from the NSS database ( NSSDB ) by using the certificate nickname from the step: Create a PKCS12 file to ease the import process into NSSDB : Install the created PKCS#12 file into the NSSDB : Check that the new certificate has been successfully imported: Restart the httpd service: Restart the Directory service: Perform all the steps on all your IdM replicas. This is a prerequisite for establishing TLS connections between the replicas. Enroll the new certificates to LDAP storage: Replace the Apache web server's old private key and certificate with the new key and the newly-signed certificate: In the command above: The -w option specifies that you are installing a certificate into the web server. The --pin option specifies the password protecting the private key. When prompted, enter the Directory Manager password. Replace the LDAP server's old private key and certificate with the new key and the newly-signed certificate: In the command above: The -d option specifies that you are installing a certificate into the LDAP server. The --pin option specifies the password protecting the private key. When prompted, enter the Directory Manager password. Restart the httpd service: Restart the Directory service: Execute the commands from the step on all the other affected replicas. Additional resources man ipa-server-certinstall(1) How do I manually renew Identity Management (IPA) certificates on RHEL 8 after they have expired? (CA-less IPA) (Red Hat Knowledgebase) Converting certificate formats to work with IdM
[ "ipa-cacert-manage install ca_certificate_chain_file.crt", "ipa-certupdate", "openssl req -new -nodes -in /var/lib/ipa/private/httpd.key -out /tmp/http.csr -addext 'subjectAltName = DNS:_server.idm.example.com_, otherName:1.3.6.1.4.1.311.20.2.3;UTF8:HTTP/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM/CN=server.idm.example.com '", "openssl req -new -newkey rsa:2048 -nodes -keyout /var/lib/ipa/private/httpd.key -out /tmp/http.csr -addext 'subjectAltName = DNS: server.idm.example.com , otherName:1.3.6.1.4.1.311.20.2.3;UTF8:HTTP/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM /CN= server.idm.example.com '", "openssl req -new -newkey rsa:2048 -nodes -keyout ~/ldap.key -out /tmp/ldap.csr -addext 'subjectAltName = DNS: server.idm.example.com , otherName:1.3.6.1.4.1.311.20.2.3;UTF8:ldap/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM /CN= server.idm.example.com '", "cp /path/to/httpd.crt /var/lib/ipa/certs/", "certutil -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Server-Cert u,u,u", "certutil -D -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -n 'Server-Cert' -f /etc/dirsrv/slapd- IDM-EXAMPLE-COM /pwdfile.txt", "openssl pkcs12 -export -in ldap.crt -inkey ldap.key -out ldap.p12 -name Server-Cert", "pk12util -i ldap.p12 -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -k /etc/dirsrv/slapd- IDM-EXAMPLE-COM /pwdfile.txt", "certutil -L -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM /", "systemctl restart httpd.service", "systemctl restart dirsrv@ IDM-EXAMPLE-COM .service", "ipa-server-certinstall -w --pin=password /var/lib/ipa/private/httpd.key /var/lib/ipa/certs/httpd.crt", "ipa-server-certinstall -d --pin=password /etc/dirsrv/slapd- IDM-EXAMPLE-COM /ldap.key /path/to/ldap.crt", "systemctl restart httpd.service", "systemctl restart dirsrv@ IDM-EXAMPLE-COM .service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/proc_replacing-the-web-server-and-ldap-server-certificates-if-they-have-expired-in-the-whole-idm-deployment_configuring-and-managing-idm
Network Observability
Network Observability OpenShift Container Platform 4.15 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-from-hostnetwork namespace: netobserv spec: podSelector: matchLabels: app: netobserv-operator ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: '' policyTypes: - Ingress", "apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>", "oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1", "oc apply -f migrate-flowcollector-v1alpha1.yaml", "oc edit crd flowcollectors.flows.netobserv.io", "oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "oc get flowcollector/cluster", "NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready", "oc get pods -n netobserv", "NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m", "oc get pods -n netobserv-privileged", "NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h", "oc describe flowcollector/cluster", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address", "oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1", "oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4", "curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64", "chmod +x ./oc-netobserv-amd64", "sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv", "oc netobserv version", "Netobserv CLI version <version>", "oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }", "sqlite3 ./output/flow/<capture_date_time>.db", "sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;", "12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli", "oc netobserv cleanup", "oc netobserv [<command>] [<feature_option>] [<command_options>] 1", "oc netobserv flows [<feature_option>] [<command_options>]", "oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv packets [<option>]", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv metrics [<option>]", "oc netobserv metrics --enable_pkt_drop --protocol=TCP", "oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather", "oc -n netobserv get flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false", "oc edit console.operator.openshift.io cluster", "spec: plugins: - netobserv-plugin", "oc -n netobserv edit flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true", "oc get pods -n openshift-console -l app=console", "oc delete pods -n openshift-console -l app=console", "oc get pods -n netobserv -l app=netobserv-plugin", "NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s", "oc logs -n netobserv -l app=netobserv-plugin", "time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server", "oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer", "oc edit -n netobserv flowcollector.yaml -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1", "oc edit subscription netobserv-operator -n openshift-netobserv-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/network_observability/index
A.11. Optional Workaround to Allow for Graceful Shutdown
A.11. Optional Workaround to Allow for Graceful Shutdown The libvirt-guests service has parameter settings that can be configured to assure that the guest can shutdown properly. It is a package that is a part of the libvirt installation and is installed by default. This service automatically saves guests to the disk when the host shuts down, and restores them to their pre-shutdown state when the host reboots. By default, this setting is set to suspend the guest. If you want the guest to be gracefully shutdown, you will need to change one of the parameters of the libvirt-guests configuration file. Procedure A.5. Changing the libvirt-guests service parameters to allow for the graceful shutdown of guests The procedure described here allows for the graceful shutdown of guest virtual machines when the host physical machine is stuck, powered off, or needs to be restarted. Open the configuration file The configuration file is located in /etc/sysconfig/libvirt-guests . Edit the file, remove the comment mark (#) and change the ON_SHUTDOWN=suspend to ON_SHUTDOWN=shutdown . Remember to save the change. URIS - checks the specified connections for a running guest. The Default setting functions in the same manner as virsh does when no explicit URI is set In addition, one can explicitly set the URI from /etc/libvirt/libvirt.conf . Note that when using the libvirt configuration file default setting, no probing will be used. ON_BOOT - specifies the action to be done to / on the guests when the host boots. The start option starts all guests that were running prior to shutdown regardless on their autostart settings. The ignore option will not start the formally running guest on boot, however, any guest marked as autostart will still be automatically started by libvirtd . The START_DELAY - sets a delay interval in between starting up the guests. This time period is set in seconds. Use the 0 time setting to make sure there is no delay and that all guests are started simultaneously. ON_SHUTDOWN - specifies the action taken when a host shuts down. Options that can be set include: suspend which suspends all running guests using virsh managedsave and shutdown which shuts down all running guests. It is best to be careful with using the shutdown option as there is no way to distinguish between a guest which is stuck or ignores shutdown requests and a guest that just needs a longer time to shutdown. When setting the ON_SHUTDOWN=shutdown , you must also set SHUTDOWN_TIMEOUT to a value suitable for the guests. PARALLEL_SHUTDOWN Dictates that the number of guests on shutdown at any time will not exceed number set in this variable and the guests will be suspended concurrently. If set to 0 , then guests are not shutdown concurrently. Number of seconds to wait for a guest to shut down. If SHUTDOWN_TIMEOUT is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If SHUTDOWN_TIMEOUT is set to 0 , then there is no timeout (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). BYPASS_CACHE can have 2 values, 0 to disable and 1 to enable. If enabled it will by-pass the file system cache when guests are restored. Note that setting this may effect performance and may cause slower operation for some file systems. Start libvirt-guests service If you have not started the service, start the libvirt-guests service. Do not restart the service as this will cause all running guest virtual machines to shutdown.
[ "vi /etc/sysconfig/libvirt-guests URIs to check for running guests example: URIS='default xen:/// vbox+tcp://host/system lxc:///' #URIS=default action taken on host boot - start all guests which were running on shutdown are started on boot regardless on their autostart settings - ignore libvirt-guests init script won't start any guest on boot, however, guests marked as autostart will still be automatically started by libvirtd #ON_BOOT=start Number of seconds to wait between each guest start. Set to 0 to allow parallel startup. #START_DELAY=0 action taken on host shutdown - suspend all running guests are suspended using virsh managedsave - shutdown all running guests are asked to shutdown. Please be careful with this settings since there is no way to distinguish between a guest which is stuck or ignores shutdown requests and a guest which just needs a long time to shutdown. When setting ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a value suitable for your guests. ON_SHUTDOWN=shutdown If set to non-zero, shutdown will suspend guests concurrently. Number of guests on shutdown at any time will not exceed number set in this variable. #PARALLEL_SHUTDOWN=0 Number of seconds we're willing to wait for a guest to shut down. If parallel shutdown is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If this is 0, then there is no time out (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). #SHUTDOWN_TIMEOUT=300 If non-zero, try to bypass the file system cache when saving and restoring guests, even though this may give slower operation for some file systems. #BYPASS_CACHE=0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings
Chapter 3. Configuring SSO for Argo CD using Keycloak
Chapter 3. Configuring SSO for Argo CD using Keycloak After the Red Hat OpenShift GitOps Operator is installed, Argo CD automatically creates a user with admin permissions. To manage multiple users, cluster administrators can use Argo CD to configure Single Sign-On (SSO). 3.1. Prerequisites Red Hat SSO is installed on the cluster. The Red Hat OpenShift GitOps Operator is installed on your OpenShift Container Platform cluster. Argo CD is installed on the cluster. 3.2. Configuring a new client in Keycloak Dex is installed by default for all the Argo CD instances created by the Operator. However, you can delete the Dex configuration and add Keycloak instead to log in to Argo CD using your OpenShift credentials. Keycloak acts as an identity broker between Argo CD and OpenShift. Procedure To configure Keycloak, follow these steps: Delete the Dex configuration by removing the .spec.sso.dex parameter from the Argo CD custom resource (CR), and save the CR: dex: openShiftOAuth: true resources: limits: cpu: memory: requests: cpu: memory: Set the value of the provider parameter to keycloak in the Argo CD CR. Configure Keycloak by performing one of the following steps: For a secure connection, set the value of the rootCA parameter as shown in the following example: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak keycloak: rootCA: "<PEM-encoded-root-certificate>" 1 server: route: enabled: true 1 A custom certificate used to verify the Keycloak's TLS certificate. The Operator reconciles changes in the .spec.sso.keycloak.rootCA parameter and updates the oidc.config parameter with the PEM encoded root certificate in the argocd-cm configuration map. For an insecure connection, leave the value of the rootCA parameter empty and use the oidc.tls.insecure.skip.verify parameter as shown below: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: extraConfig: oidc.tls.insecure.skip.verify: "true" sso: provider: keycloak keycloak: rootCA: "" Note The Keycloak instance takes 2-3 minutes to install and run. 3.3. Logging in to Keycloak Log in to the Keycloak console to manage identities or roles and define the permissions assigned to the various roles. Prerequisites The default configuration of Dex is removed. Your Argo CD CR must be configured to use the Keycloak SSO provider. Procedure Get the Keycloak route URL for login: USD oc -n argocd get route keycloak NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD keycloak keycloak-default.apps.ci-ln-******.origin-ci-int-aws.dev.**.com keycloak <all> reencrypt None Get the Keycloak pod name that stores the user name and password as environment variables: USD oc -n argocd get pods NAME READY STATUS RESTARTS AGE keycloak-1-2sjcl 1/1 Running 0 45m Get the Keycloak user name: USD oc -n argocd exec keycloak-1-2sjcl -- "env" | grep SSO_ADMIN_USERNAME SSO_ADMIN_USERNAME=Cqid54Ih Get the Keycloak password: USD oc -n argocd exec keycloak-1-2sjcl -- "env" | grep SSO_ADMIN_PASSWORD SSO_ADMIN_PASSWORD=GVXxHifH On the login page, click LOG IN VIA KEYCLOAK . Note You only see the option LOGIN VIA KEYCLOAK after the Keycloak instance is ready. Click Login with OpenShift . Note Login using kubeadmin is not supported. Enter the OpenShift credentials to log in. Optional: By default, any user logged in to Argo CD has read-only access. You can manage the user level access by updating the argocd-rbac-cm config map: policy.csv: <name>, <email>, role:admin 3.4. Uninstalling Keycloak You can delete the Keycloak resources and their relevant configurations by removing the SSO field from the Argo CD Custom Resource (CR) file. After you remove the SSO field, the values in the file look similar to the following: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true Note A Keycloak application created by using this method is currently not persistent. Additional configurations created in the Argo CD Keycloak realm are deleted when the server restarts.
[ "dex: openShiftOAuth: true resources: limits: cpu: memory: requests: cpu: memory:", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak keycloak: rootCA: \"<PEM-encoded-root-certificate>\" 1 server: route: enabled: true", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: extraConfig: oidc.tls.insecure.skip.verify: \"true\" sso: provider: keycloak keycloak: rootCA: \"\"", "oc -n argocd get route keycloak NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD keycloak keycloak-default.apps.ci-ln-******.origin-ci-int-aws.dev.**.com keycloak <all> reencrypt None", "oc -n argocd get pods NAME READY STATUS RESTARTS AGE keycloak-1-2sjcl 1/1 Running 0 45m", "oc -n argocd exec keycloak-1-2sjcl -- \"env\" | grep SSO_ADMIN_USERNAME SSO_ADMIN_USERNAME=Cqid54Ih", "oc -n argocd exec keycloak-1-2sjcl -- \"env\" | grep SSO_ADMIN_PASSWORD SSO_ADMIN_PASSWORD=GVXxHifH", "policy.csv: <name>, <email>, role:admin", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/access_control_and_user_management/configuring-sso-for-argo-cd-using-keycloak
Chapter 10. Subscription [operators.coreos.com/v1alpha1]
Chapter 10. Subscription [operators.coreos.com/v1alpha1] Description Subscription keeps operators up to date by tracking changes to Catalogs. Type object Required metadata spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubscriptionSpec defines an Application that can be installed status object 10.1.1. .spec Description SubscriptionSpec defines an Application that can be installed Type object Required name source sourceNamespace Property Type Description channel string config object SubscriptionConfig contains configuration specified for a subscription. installPlanApproval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". name string source string sourceNamespace string startingCSV string 10.1.2. .spec.config Description SubscriptionConfig contains configuration specified for a subscription. Type object Property Type Description affinity object If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. env array Env is a list of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ resources object Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ selector object Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. tolerations array Tolerations are the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. volumeMounts array List of VolumeMounts to set in the container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array List of Volumes to set in the podSpec. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 10.1.3. .spec.config.affinity Description If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 10.1.4. .spec.config.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 10.1.5. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 10.1.6. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 10.1.7. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.8. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.9. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.10. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 10.1.11. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.12. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 10.1.13. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 10.1.14. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.15. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.16. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.17. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 10.1.18. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.19. .spec.config.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.20. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.21. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.22. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.23. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.24. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.25. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.26. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.27. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.28. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.29. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.30. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.31. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.32. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.33. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.34. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.35. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.36. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.37. .spec.config.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.38. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.39. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.40. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.41. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.42. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.43. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.44. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.45. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.46. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.47. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.48. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.49. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.50. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.51. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.52. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.53. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.54. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.55. .spec.config.env Description Env is a list of environment variables to set in the container. Cannot be updated. Type array 10.1.56. .spec.config.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 10.1.57. .spec.config.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 10.1.58. .spec.config.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 10.1.59. .spec.config.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.60. .spec.config.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.61. .spec.config.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 10.1.62. .spec.config.envFrom Description EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. Type array 10.1.63. .spec.config.envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 10.1.64. .spec.config.envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 10.1.65. .spec.config.envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 10.1.66. .spec.config.resources Description Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.67. .spec.config.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.68. .spec.config.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.69. .spec.config.selector Description Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.70. .spec.config.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.71. .spec.config.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.72. .spec.config.tolerations Description Tolerations are the pod's tolerations. Type array 10.1.73. .spec.config.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 10.1.74. .spec.config.volumeMounts Description List of VolumeMounts to set in the container. Type array 10.1.75. .spec.config.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 10.1.76. .spec.config.volumes Description List of Volumes to set in the podSpec. Type array 10.1.77. .spec.config.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 10.1.78. .spec.config.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 10.1.79. .spec.config.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 10.1.80. .spec.config.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 10.1.81. .spec.config.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 10.1.82. .spec.config.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.83. .spec.config.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 10.1.84. .spec.config.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.85. .spec.config.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.86. .spec.config.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.87. .spec.config.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.88. .spec.config.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 10.1.89. .spec.config.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.90. .spec.config.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.91. .spec.config.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 10.1.92. .spec.config.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.93. .spec.config.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.94. .spec.config.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.95. .spec.config.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 10.1.96. .spec.config.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 10.1.97. .spec.config.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 10.1.98. .spec.config.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 10.1.99. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 10.1.100. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 10.1.101. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 10.1.102. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.103. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.104. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.105. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.106. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.107. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.108. .spec.config.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 10.1.109. .spec.config.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 10.1.110. .spec.config.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.111. .spec.config.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 10.1.112. .spec.config.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 10.1.113. .spec.config.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 10.1.114. .spec.config.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 10.1.115. .spec.config.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 10.1.116. .spec.config.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 10.1.117. .spec.config.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.118. .spec.config.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 10.1.119. .spec.config.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 10.1.120. .spec.config.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 10.1.121. .spec.config.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 10.1.122. .spec.config.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 10.1.123. .spec.config.volumes[].projected.sources Description sources is the list of volume projections Type array 10.1.124. .spec.config.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 10.1.125. .spec.config.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.126. .spec.config.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.127. .spec.config.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.128. .spec.config.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.129. .spec.config.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 10.1.130. .spec.config.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.131. .spec.config.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.132. .spec.config.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.133. .spec.config.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 10.1.134. .spec.config.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.135. .spec.config.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.136. .spec.config.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 10.1.137. .spec.config.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 10.1.138. .spec.config.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 10.1.139. .spec.config.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.140. .spec.config.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 10.1.141. .spec.config.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.142. .spec.config.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 10.1.143. .spec.config.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.144. .spec.config.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.145. .spec.config.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 10.1.146. .spec.config.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.147. .spec.config.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 10.1.148. .status Description Type object Required lastUpdated Property Type Description catalogHealth array CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. catalogHealth[] object SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. conditions array Conditions is a list of the latest available observations about a Subscription's current state. conditions[] object SubscriptionCondition represents the latest available observations of a Subscription's state. currentCSV string CurrentCSV is the CSV the Subscription is progressing to. installPlanGeneration integer InstallPlanGeneration is the current generation of the installplan installPlanRef object InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. installedCSV string InstalledCSV is the CSV currently installed by the Subscription. installplan object Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef lastUpdated string LastUpdated represents the last time that the Subscription status was updated. reason string Reason is the reason the Subscription was transitioned to its current state. state string State represents the current state of the Subscription 10.1.149. .status.catalogHealth Description CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. Type array 10.1.150. .status.catalogHealth[] Description SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. Type object Required catalogSourceRef healthy lastUpdated Property Type Description catalogSourceRef object CatalogSourceRef is a reference to a CatalogSource. healthy boolean Healthy is true if the CatalogSource is healthy; false otherwise. lastUpdated string LastUpdated represents the last time that the CatalogSourceHealth changed 10.1.151. .status.catalogHealth[].catalogSourceRef Description CatalogSourceRef is a reference to a CatalogSource. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.152. .status.conditions Description Conditions is a list of the latest available observations about a Subscription's current state. Type array 10.1.153. .status.conditions[] Description SubscriptionCondition represents the latest available observations of a Subscription's state. Type object Required status type Property Type Description lastHeartbeatTime string LastHeartbeatTime is the last time we got an update on a given condition lastTransitionTime string LastTransitionTime is the last time the condition transit from one status to another message string Message is a human-readable message indicating details about last transition. reason string Reason is a one-word CamelCase reason for the condition's last transition. status string Status is the status of the condition, one of True, False, Unknown. type string Type is the type of Subscription condition. 10.1.154. .status.installPlanRef Description InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.155. .status.installplan Description Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef Type object Required apiVersion kind name uuid Property Type Description apiVersion string kind string name string uuid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 10.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/subscriptions GET : list objects of kind Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions DELETE : delete collection of Subscription GET : list objects of kind Subscription POST : create a Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} DELETE : delete a Subscription GET : read the specified Subscription PATCH : partially update the specified Subscription PUT : replace the specified Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status GET : read status of the specified Subscription PATCH : partially update status of the specified Subscription PUT : replace status of the specified Subscription 10.2.1. /apis/operators.coreos.com/v1alpha1/subscriptions Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Subscription Table 10.2. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty 10.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Subscription Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Subscription Table 10.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.8. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a Subscription Table 10.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.10. Body parameters Parameter Type Description body Subscription schema Table 10.11. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 202 - Accepted Subscription schema 401 - Unauthorized Empty 10.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the Subscription namespace string object name and auth scope, such as for teams and projects Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Subscription Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Subscription Table 10.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.18. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Subscription Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.20. Body parameters Parameter Type Description body Patch schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Subscription Table 10.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.23. Body parameters Parameter Type Description body Subscription schema Table 10.24. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty 10.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status Table 10.25. Global path parameters Parameter Type Description name string name of the Subscription namespace string object name and auth scope, such as for teams and projects Table 10.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Subscription Table 10.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.28. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Subscription Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.30. Body parameters Parameter Type Description body Patch schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Subscription Table 10.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.33. Body parameters Parameter Type Description body Subscription schema Table 10.34. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/subscription-operators-coreos-com-v1alpha1
4.3. Special Object Elements
4.3. Special Object Elements Special object elements define relationships to special fixed resources within the virtualization environment. Table 4.3. Special Objects Relationship Description templates/blank The default blank virtual machine template for your virtualization environment. This template exists in every cluster as opposed to a standard template, which only exists in a single cluster. tags/root The root tag that acts as a base for tag hierarchy in your virtualization environment.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/special_object_elements
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 4-10 Mon Aug 10 2020 Marek Suchanek An asynchronous update Revision 4-09 Mon Jan 7 2019 Marek Suchanek An asynchronous update Revision 4-08 Mon Oct 23 2018 Marek Suchanek Preparing document for 7.6 GA publication Revision 4-07 Thu Sep 13 2018 Marek Suchanek An asynchronous update Revision 4-00 Fri Apr 6 2018 Marek Suchanek Document version for 7.5 GA publication. Revision 3-95 Thu Apr 5 2018 Marek Suchanek An asynchronous update Revision 3-93 Mon Mar 5 2018 Marek Suchanek New chapter: VDO Integration Revision 3-92 Fri Feb 9 2018 Marek Suchanek An asynchronous update Revision 3-90 Wed Dec 6 2017 Marek Suchanek Version for 7.5 Alpha publication. Revision 3-86 Mon Nov 6 2017 Marek Suchanek An asynchronous update. Revision 3-80 Thu Jul 27 2017 Milan Navratil Document version for 7.4 GA publication. Revision 3-77 Wed May 24 2017 Milan Navratil An asynchronous update. Revision 3-68 Fri Oct 21 2016 Milan Navratil Version for 7.3 GA publication. Revision 3-67 Fri Jun 17 2016 Milan Navratil An asynchronous update. Revision 3-64 Wed Nov 11 2015 Jana Heves Version for 7.2 GA release. Revision 3-33 Wed Feb 18 2015 Jacquelynn East Version for 7.1 GA Revision 3-26 Wed Jan 21 2015 Jacquelynn East Added overview of Ceph Revision 3-22 Thu Dec 4 2014 Jacquelynn East 7.1 Beta Revision 3-4 Thu Jul 17 2014 Jacquelynn East Added new chapter on targetcli Revision 3-1 Tue Jun 3 2014 Jacquelynn East Version for 7.0 GA release
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/appe-publican-revision_history
4.9. Encryption
4.9. Encryption 4.9.1. Using LUKS Disk Encryption Linux Unified Key Setup-on-disk-format (or LUKS) allows you to encrypt partitions on your Linux computer. This is particularly important when it comes to mobile computers and removable media. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. Overview of LUKS What LUKS does LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening which protects against dictionary attacks. LUKS devices contain multiple key slots, allowing users to add backup keys or passphrases. What LUKS does not do: LUKS is not well-suited for scenarios requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. Important Disk-encryption solutions like LUKS only protect the data when your system is off. Once the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who would normally have access to them. 4.9.1.1. LUKS Implementation in Red Hat Enterprise Linux Red Hat Enterprise Linux 7 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that will be asked every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table you can choose which partitions you want to encrypt. This is set in the partition table settings. The default cipher used for LUKS (see cryptsetup --help ) is aes-cbc-essiv:sha256 (ESSIV - Encrypted Salt-Sector Initialization Vector). Note that the installation program, Anaconda , uses by default XTS mode (aes-xts-plain64). The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Ciphers that are available are: AES - Advanced Encryption Standard - FIPS PUB 197 Twofish (a 128-bit block cipher) Serpent cast5 - RFC 2144 cast6 - RFC 2612 4.9.1.2. Manually Encrypting Directories Warning Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you backup your data to an external source before beginning this procedure! Enter runlevel 1 by typing the following at a shell prompt as root: Unmount your existing /home : If the command in the step fails, use fuser to find processes hogging /home and kill them: Verify /home is no longer mounted: Fill your partition with random data: This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data. Initialize your partition: Open the newly encrypted device: Make sure the device is present: Create a file system: Mount the file system: Make sure the file system is visible: Add the following to the /etc/crypttab file: Edit the /etc/fstab file, removing the old entry for /home and adding the following line: Restore default SELinux security contexts: Reboot the machine: The entry in the /etc/crypttab makes your computer ask your luks passphrase on boot. Log in as root and restore your backup. You now have an encrypted partition for all of your data to safely rest while the computer is off. 4.9.1.3. Add a New Passphrase to an Existing Device Use the following command to add a new passphrase to an existing device: After being prompted for any one of the existing passprases for authentication, you will be prompted to enter the new passphrase. 4.9.1.4. Remove a Passphrase from an Existing Device Use the following command to remove a passphrase from an existing device: You will be prompted for the passphrase you want to remove and then for any one of the remaining passphrases for authentication. 4.9.1.5. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions. To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically. Note You can use kickstart to set a separate passphrase for each new encrypted block device. 4.9.1.6. Additional Resources For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux 7 visit one of the following links: LUKS home page LUKS/cryptsetup FAQ LUKS - Linux Unified Key Setup Wikipedia article HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove 4.9.2. Creating GPG Keys GPG is used to identify yourself and authenticate your communications, including those with people you do not know. GPG allows anyone reading a GPG-signed email to verify its authenticity. In other words, GPG allows someone to be reasonably certain that communications signed by you actually are from you. GPG is useful because it helps prevent third parties from altering code or intercepting conversations and altering the message. 4.9.2.1. Creating GPG Keys in GNOME To create a GPG Key in GNOME , follow these steps: Install the Seahorse utility, which makes GPG key management easier: To create a key, from the Applications Accessories menu select Passwords and Encryption Keys , which starts the application Seahorse . From the File menu select New and then PGP Key . Then click Continue . Type your full name, email address, and an optional comment describing who you are (for example: John C. Smith, [email protected] , Software Engineer). Click Create . A dialog is displayed asking for a passphrase for the key. Choose a strong passphrase but also easy to remember. Click OK and the key is created. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.2. Creating GPG Keys in KDE To create a GPG Key in KDE , follow these steps: Start the KGpg program from the main menu by selecting Applications Utilities Encryption Tool . If you have never used KGpg before, the program walks you through the process of creating your own GPG keypair. A dialog box appears prompting you to create a new key pair. Enter your name, email address, and an optional comment. You can also choose an expiration time for your key, as well as the key strength (number of bits) and algorithms. Enter your passphrase in the dialog box. At this point, your key appears in the main KGpg window. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.3. Creating GPG Keys Using the Command Line Use the following shell command: This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer: In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign communications, but also to encrypt files. Choose the key size: Again, the default, 2048, is sufficient for almost all users, and represents an extremely strong level of security. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the default, which is none . If, for example, the email address on the key becomes invalid, an expiration date will remind others to stop using that public key. Entering a value of 1y , for example, makes the key valid for one year. (You may change this expiration date after the key is generated, if you change your mind.) Before the gpg2 application asks for signature information, the following prompt appears: Enter y to finish the process. Enter your name and email address for your GPG key. Remember this process is about authenticating you as a real individual. For this reason, include your real name. If you choose a bogus email address, it will be more difficult for others to find your public key. This makes authenticating your communications difficult. If you are using this GPG key for self-introduction on a mailing list, for example, enter the email address you use on that list. Use the comment field to include aliases or other information. (Some people use different keys for different purposes and identify each key with a comment, such as "Office" or "Open Source Projects.") At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program asks you to enter your passphrase twice to ensure you made no typing errors. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse, type random keys, or perform other tasks on the system during this step to speed up the process. Once this step is finished, your keys are complete and ready to use: The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address: Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG key ID is 1B2AFA1C . In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . Warning If you forget your passphrase, the key cannot be used and any data encrypted using that key will be lost. 4.9.2.4. About Public Key Encryption Wikipedia - Public Key Cryptography HowStuffWorks - Encryption 4.9.3. Using openCryptoki for Public-Key Cryptography openCryptoki is a Linux implementation of PKCS#11 , which is a Public-Key Cryptography Standard that defines an application programming interface ( API ) to cryptographic devices called tokens. Tokens may be implemented in hardware or software. This chapter provides an overview of the way the openCryptoki system is installed, configured, and used in Red Hat Enterprise Linux 7. 4.9.3.1. Installing openCryptoki and Starting the Service To install the basic openCryptoki packages on your system, including a software implementation of a token for testing purposes, enter the following command as root : Depending on the type of hardware tokens you intend to use, you may need to install additional packages that provide support for your specific use case. For example, to obtain support for Trusted Platform Module ( TPM ) devices, you need to install the opencryptoki-tpmtok package. See the Installing Packages section of the Red Hat Enterprise Linux 7 System Administrator's Guide for general information on how to install packages using the Yum package manager. To enable the openCryptoki service, you need to run the pkcsslotd daemon. Start the daemon for the current session by executing the following command as root : To ensure that the service is automatically started at boot time, enter the following command: See the Managing Services with systemd chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for more information on how to use systemd targets to manage services. 4.9.3.2. Configuring and Using openCryptoki When started, the pkcsslotd daemon reads the /etc/opencryptoki/opencryptoki.conf configuration file, which it uses to collect information about the tokens configured to work with the system and about their slots. The file defines the individual slots using key-value pairs. Each slot definition can contain a description, a specification of the token library to be used, and an ID of the slot's manufacturer. Optionally, the version of the slot's hardware and firmware may be defined. See the opencryptoki.conf (5) manual page for a description of the file's format and for a more detailed description of the individual keys and the values that can be assigned to them. To modify the behavior of the pkcsslotd daemon at run time, use the pkcsconf utility. This tool allows you to show and configure the state of the daemon, as well as to list and modify the currently configured slots and tokens. For example, to display information about tokens, issue the following command (note that all non-root users that need to communicate with the pkcsslotd daemon must be a part of the pkcs11 system group): See the pkcsconf (1) manual page for a list of arguments available with the pkcsconf tool. Warning Keep in mind that only fully trusted users should be assigned membership in the pkcs11 group, as all members of this group have the right to block other users of the openCryptoki service from accessing configured PKCS#11 tokens. All members of this group can also execute arbitrary code with the privileges of any other users of openCryptoki . 4.9.4. Using Smart Cards to Supply Credentials to OpenSSH The smart card is a lightweight hardware security module in a USB stick, MicroSD, or SmartCard form factor. It provides a remotely manageable secure key store. In Red Hat Enterprise Linux 7, OpenSSH supports authentication using smart cards. To use your smart card with OpenSSH, store the public key from the card to the ~/.ssh/authorized_keys file. Install the PKCS#11 library provided by the opensc package on the client. PKCS#11 is a Public-Key Cryptography Standard that defines an application programming interface (API) to cryptographic devices called tokens. Enter the following command as root : 4.9.4.1. Retrieving a Public Key from a Card To list the keys on your card, use the ssh-keygen command. Specify the shared library (OpenSC in the following example) with the -D directive. 4.9.4.2. Storing a Public Key on a Server To enable authentication using a smart card on a remote server, transfer the public key to the remote server. Do it by copying the retrieved string (key) and pasting it to the remote shell, or by storing your key to a file ( smartcard.pub in the following example) and using the ssh-copy-id command: Storing a public key without a private key file requires to use the SSH_COPY_ID_LEGACY=1 environment variable or the -f option. 4.9.4.3. Authenticating to a Server with a Key on a Smart Card OpenSSH can read your public key from a smart card and perform operations with your private key without exposing the key itself. This means that the private key does not leave the card. To connect to a remote server using your smart card for authentication, enter the following command and enter the PIN protecting your card: Replace the hostname with the actual host name to which you want to connect. To save unnecessary typing time you connect to the remote server, store the path to the PKCS#11 library in your ~/.ssh/config file: Connect by running the ssh command without any additional options: 4.9.4.4. Using ssh-agent to Automate PIN Logging In Set up environmental variables to start using ssh-agent . You can skip this step in most cases because ssh-agent is already running in a typical session. Use the following command to check whether you can connect to your authentication agent: To avoid writing your PIN every time you connect using this key, add the card to the agent by running the following command: To remove the card from ssh-agent , use the following command: Note FIPS 201-2 requires explicit user action by the Personal Identity Verification (PIV) cardholder as a condition for use of the digital signature key stored on the card. OpenSC correctly enforces this requirement. However, for some applications it is impractical to require the cardholder to enter the PIN for each signature. To cache the smart card PIN, remove the # character before the pin_cache_ignore_user_consent = true; option in the /etc/opensc-x86_64.conf . See the Cardholder Authentication for the PIV Digital Signature Key (NISTIR 7863) report for more information. 4.9.4.5. Additional Resources Setting up your hardware or software token is described in the Smart Card support in Red Hat Enterprise Linux 7 article. For more information about the pkcs11-tool utility for managing and using smart cards and similar PKCS#11 security tokens, see the pkcs11-tool(1) man page. 4.9.5. Trusted and Encrypted Keys Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that utilize the kernel keyring service. The fact that the keys never appear in user space in an unencrypted form means that their integrity can be verified, which in turn means that they can be used, for example, by the extended verification module ( EVM ) to verify and confirm the integrity of a running system. User-level programs can only ever access the keys in the form of encrypted blobs . Trusted keys need a hardware component: the Trusted Platform Module ( TPM ) chip, which is used to both create and encrypt ( seal ) the keys. The TPM seals the keys using a 2048-bit RSA key called the storage root key ( SRK ). In addition to that, trusted keys may also be sealed using a specific set of the TPM 's platform configuration register ( PCR ) values. The PCR contains a set of integrity-management values that reflect the BIOS , boot loader, and operating system. This means that PCR -sealed keys can only be decrypted by the TPM on the exact same system on which they were encrypted. However, once a PCR -sealed trusted key is loaded (added to a keyring), and thus its associated PCR values are verified, it can be updated with new (or future) PCR values, so that a new kernel, for example, can be booted. A single key can also be saved as multiple blobs, each with different PCR values. Encrypted keys do not require a TPM , as they use the kernel AES encryption, which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. This master key can be either a trusted key or a user key, which is their main disadvantage - if the master key is not a trusted key, the encrypted key is only as secure as the user key used to encrypt it. 4.9.5.1. Working with keys Before performing any operations with the keys, ensure that the trusted and encrypted-keys kernel modules are loaded in the system. Consider the following points while loading the kernel modules in different RHEL kernel architectures: For RHEL kernels with the x86_64 architecture, the TRUSTED_KEYS and ENCRYPTED_KEYS code is built in as a part of the core kernel code. As a result, the x86_64 system users can use these keys without loading the trusted and encrypted-keys modules. For all other architectures, it is necessary to load the trusted and encrypted-keys kernel modules before performing any operations with the keys. To load the kernel modules, execute the following command: The trusted and encrypted keys can be created, loaded, exported, and updated using the keyctl utility. For detailed information about using keyctl , see keyctl (1) . Note In order to use a TPM (such as for creating and sealing trusted keys), it needs to be enabled and active. This can be usually achieved through a setting in the machine's BIOS or using the tpm_setactive command from the tpm-tools package of utilities. Also, the TrouSers application needs to be installed (the trousers package), and the tcsd daemon, which is a part of the TrouSers suite, running to communicate with the TPM . To create a trusted key using a TPM , execute the keyctl command with the following syntax: ~]USD keyctl add trusted name "new keylength [ options ]" keyring Using the above syntax, an example command can be constructed as follows: The above example creates a trusted key called kmk with the length of 32 bytes (256 bits) and places it in the user keyring ( @u ). The keys may have a length of 32 to 128 bytes (256 to 1024 bits). Use the show subcommand to list the current structure of the kernel keyrings: The print subcommand outputs the encrypted key to the standard output. To export the key to a user-space blob, use the pipe subcommand as follows: To load the trusted key from the user-space blob, use the add command again with the blob as an argument: The TPM -sealed trusted key can then be employed to create secure encrypted keys. The following command syntax is used for generating encrypted keys: ~]USD keyctl add encrypted name "new [ format ] key-type : master-key-name keylength " keyring Based on the above syntax, a command for generating an encrypted key using the already created trusted key can be constructed as follows: To create an encrypted key on systems where a TPM is not available, use a random sequence of numbers to generate a user key, which is then used to seal the actual encrypted keys. Then generate the encrypted key using the random-number user key: The list subcommand can be used to list all keys in the specified kernel keyring: Important Keep in mind that encrypted keys that are not sealed by a master trusted key are only as secure as the user master key (random-number key) used to encrypt them. Therefore, the master user key should be loaded as securely as possible and preferably early during the boot process. 4.9.5.2. Additional Resources The following offline and online resources can be used to acquire additional information pertaining to the use of trusted and encrypted keys. Installed Documentation keyctl (1) - Describes the use of the keyctl utility and its subcommands. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt - The official documentation about the trusted and encrypted keys feature of the Linux kernel. See Also Section A.1.1, "Advanced Encryption Standard - AES" provides a concise description of the Advanced Encryption Standard . Section A.2, "Public-key Encryption" describes the public-key cryptographic approach and the various cryptographic protocols it uses. 4.9.6. Using the Random Number Generator In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random the numbers are, the better the chance of obtaining unique keys. Entropy for generating random numbers is usually obtained from computing environmental "noise" or using a hardware random number generator . The rngd daemon, which is a part of the rng-tools package, is capable of using both environmental noise and hardware random number generators for extracting entropy. The daemon checks whether the data supplied by the source of randomness is sufficiently random and then stores it in the random-number entropy pool of the kernel. The random numbers it generates are made available through the /dev/random and /dev/urandom character devices. The difference between /dev/random and /dev/urandom is that the former is a blocking device, which means it stops supplying numbers when it determines that the amount of entropy is insufficient for generating a properly random output. Conversely, /dev/urandom is a non-blocking source, which reuses the entropy pool of the kernel and is thus able to provide an unlimited supply of pseudo-random numbers, albeit with less entropy. As such, /dev/urandom should not be used for creating long-term cryptographic keys. To install the rng-tools package, issue the following command as the root user: To start the rngd daemon, execute the following command as root : To query the status of the daemon, use the following command: To start the rngd daemon with optional parameters, execute it directly. For example, to specify an alternative source of random-number input (other than /dev/hwrandom ), use the following command: The command starts the rngd daemon with /dev/hwrng as the device from which random numbers are read. Similarly, you can use the -o (or --random-device ) option to choose the kernel device for random-number output (other than the default /dev/random ). See the rngd (8) manual page for a list of all available options. To check which sources of entropy are available in a given system, execute the following command as root : Note After entering the rngd -v command, the according process continues running in background. The -b, --background option (become a daemon) is applied by default. If there is not any TPM device present, you will see only the Intel Digital Random Number Generator (DRNG) as a source of entropy. To check if your CPU supports the RDRAND processor instruction, enter the following command: Note For more information and software code examples, see Intel Digital Random Number Generator (DRNG) Software Implementation Guide. The rng-tools package also contains the rngtest utility, which can be used to check the randomness of data. To test the level of randomness of the output of /dev/random , use the rngtest tool as follows: A high number of failures shown in the output of the rngtest tool indicates that the randomness of the tested data is insufficient and should not be relied upon. See the rngtest (1) manual page for a list of options available for the rngtest utility. Red Hat Enterprise Linux 7 introduced the virtio RNG (Random Number Generator) device that provides KVM virtual machines with access to entropy from the host machine. With the recommended setup, hwrng feeds into the entropy pool of the host Linux kernel (through /dev/random ), and QEMU will use /dev/random as the source for entropy requested by guests. Figure 4.1. The virtio RNG device Previously, Red Hat Enterprise Linux 7.0 and Red Hat Enterprise Linux 6 guests could make use of the entropy from hosts through the rngd user space daemon. Setting up the daemon was a manual step for each Red Hat Enterprise Linux installation. With Red Hat Enterprise Linux 7.1, the manual step has been eliminated, making the entire process seamless and automatic. The use of rngd is now not required and the guest kernel itself fetches entropy from the host when the available entropy falls below a specific threshold. The guest kernel is then in a position to make random numbers available to applications as soon as they request them. The Red Hat Enterprise Linux installer, Anaconda , now provides the virtio-rng module in its installer image, making available host entropy during the Red Hat Enterprise Linux installation. Important To correctly decide which random number generator you should use in your scenario, see the Understanding the Red Hat Enterprise Linux random number generator interface article.
[ "telinit 1", "umount /home", "fuser -mvk /home", "grep home /proc/mounts", "shred -v --iterations=1 /dev/VG00/LV_home", "cryptsetup --verbose --verify-passphrase luksFormat /dev/VG00/LV_home", "cryptsetup luksOpen /dev/VG00/LV_home home", "ls -l /dev/mapper | grep home", "mkfs.ext3 /dev/mapper/home", "mount /dev/mapper/home /home", "df -h | grep home", "home /dev/VG00/LV_home none", "/dev/mapper/home /home ext3 defaults 1 2", "/sbin/restorecon -v -R /home", "shutdown -r now", "cryptsetup luksAddKey device", "cryptsetup luksRemoveKey device", "~]# yum install seahorse", "~]USD gpg2 --gen-key", "Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection?", "RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048)", "Please specify how long the key should be valid. 0 = key does not expire d = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years key is valid for? (0)", "Is this correct (y/N)?", "pub 1024D/1B2AFA1C 2005-03-31 John Q. Doe <[email protected]> Key fingerprint = 117C FE83 22EA B843 3E86 6486 4320 545E 1B2A FA1C sub 1024g/CEA4B22E 2005-03-31 [expires: 2006-03-31]", "~]USD gpg2 --fingerprint [email protected]", "~]# yum install opencryptoki", "~]# systemctl start pkcsslotd", "~]# systemctl enable pkcsslotd", "~]USD pkcsconf -t", "~]# yum install opensc", "~]USD ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so ssh-rsa AAAAB3NzaC1yc[...]+g4Mb9", "~]USD ssh-copy-id -f -i smartcard.pub user@hostname user@hostname's password: Number of key(s) added: 1 Now try logging into the machine, with: \"ssh user@hostname\" and check to make sure that only the key(s) you wanted were added.", "[localhost ~]USD ssh -I /usr/lib64/pkcs11/opensc-pkcs11.so hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "Host hostname PKCS11Provider /usr/lib64/pkcs11/opensc-pkcs11.so", "[localhost ~]USD ssh hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "~]USD ssh-add -l Could not open a connection to your authentication agent. ~]USD eval `ssh-agent`", "~]USD ssh-add -s /usr/lib64/pkcs11/opensc-pkcs11.so Enter PIN for 'Test (UserPIN)': Card added: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]USD ssh-add -e /usr/lib64/pkcs11/opensc-pkcs11.so Card removed: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]# modprobe trusted encrypted-keys", "~]USD keyctl add trusted kmk \"new 32\" @u 642500861", "~]USD keyctl show Session Keyring -3 --alswrv 500 500 keyring: _ses 97833714 --alswrv 500 -1 \\_ keyring: _uid.1000 642500861 --alswrv 500 500 \\_ trusted: kmk", "~]USD keyctl pipe 642500861 > kmk.blob", "~]USD keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "~]USD keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "~]USD keyctl add user kmk-user \"`dd if=/dev/urandom bs=1 count=32 2>/dev/null`\" @u 427069434", "~]USD keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "~]USD keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "~]# yum install rng-tools", "~]# systemctl start rngd", "~]# systemctl status rngd", "~]# rngd --rng-device= /dev/hwrng", "~]# rngd -vf Unable to open file: /dev/tpm0 Available entropy sources: DRNG", "~]USD cat /proc/cpuinfo | grep rdrand", "~]USD cat /dev/random | rngtest -c 1000 rngtest 5 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests rngtest: bits received from input: 20000032 rngtest: FIPS 140-2 successes: 998 rngtest: FIPS 140-2 failures: 2 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 2 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=1.171; avg=8.453; max=11.374)Mibits/s rngtest: FIPS tests speed: (min=15.545; avg=143.126; max=157.632)Mibits/s rngtest: Program run time: 2390520 microseconds" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-encryption
Chapter 1. Security APIs
Chapter 1. Security APIs 1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 1.2. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object 1.3. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.6. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.7. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 1.8. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_apis/security-apis
function::local_clock_ns
function::local_clock_ns Name function::local_clock_ns - Number of nanoseconds on the local cpu's clock Synopsis Arguments None Description This function returns the number of nanoseconds on the local cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy).
[ "local_clock_ns:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-local-clock-ns
Chapter 2. Migrating a Red Hat Single Sign-On 7.6 server
Chapter 2. Migrating a Red Hat Single Sign-On 7.6 server This section provides instructions for migrating a standalone server deployed from the ZIP distribution. Red Hat build of Keycloak 22.0 is built with Quarkus, which replaces the Red Hat JBoss Enterprise Application Platform (JBoss EAP) that was used by Red Hat Single Sign-On 7.6. The main changes to the server are the following: A revamped configuration experience, which is streamlined and supports great flexibility in setting configuration options. An RPM distribution of the server is no longer available 2.1. Prerequisites The instance of Red Hat Single Sign-On 7.6 was shut down so that it does not use the same database instance that will be used by Red Hat build of Keycloak . You backed up the database. OpenJDK17 is installed. You reviewed the Release Notes . 2.2. Migration process overview The following sections provide instructions for these migration steps: Download Red Hat build of Keycloak . Migrate the configuration. Migrate the database. Start the Red Hat build of Keycloak server. 2.3. Downloading Red Hat build of Keycloak The Red Hat build of Keycloak server download ZIP file contains the scripts and binaries to run the Red Hat build of Keycloak server. Download the Red Hat build of Keycloak server distribution file from the Red Hat customer portal . Unpack the ZIP file using the unzip command. 2.4. Migrating the configuration A new unified way to configure the Red Hat build of Keycloak server is through configuration options. The Red Hat Single Sign-On 7.6 configuration mechanism, such as standalone.xml, jboss-cli, and so on, no longer applies. Each option can be defined through the following configuration sources: CLI parameters Environment variables Configuration file Java KeyStore file If the same configuration option is specified through different configuration sources, the first source in the list above is used. All configuration options are available in all the different configuration sources, where the main difference is the format of the key. For example, here are four ways to configure the database hostname: Source Format CLI parameters --db-url-host cliValue Environment variables KC_DB_URL_HOST=envVarValue Configuration file db-url-host=confFileValue Java KeyStore file kc.db-url-host=keystoreValue The kc.sh --help command as well as the Red Hat build of Keycloak documentation provides a complete list of all available configuration options, where options are grouped by categories such as cache, database, and so on. Also, separate chapters exist for each area to configure, such as the chapter for Configuring the database . Additional resources Configuring Keycloak 2.4.1. Migrating the database configuration In contrast to Red Hat Single Sign-On 7.6, Red Hat build of Keycloak has built-in support for the supported databases removing the need to manually install and configure the database drivers. The exception is Oracle and Microsoft SQL Server, which still require manual installation of the drivers. In terms of configuration, consider the datasource subsystem from your existing Red Hat Single Sign-On 7.6 installation and map those configurations to the options available from the Database configuration category in Red Hat build of Keycloak . For example, a configuration appears as follows: <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true" statistics-enabled="true"> <connection-url>jdbc:postgresql://mypostgres:5432/mydb?currentSchema=myschema</connection-url> <driver>postgresql</driver> <pool> <min-pool-size>5</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>myuser</user-name> <password>myuser</password> </security> </datasource> In Red Hat build of Keycloak , the equivalent configuration using CLI parameters would be: kc.sh start --db postgres --db-url-host mypostgres --db-url-port 5432 --db-url-database mydb --db-schema myschema --db-pool-min-size 5 --db-pool-max-size 50 --db-username myser --db-password myuser Note Consider storing database credentials in a secure KeyStore configuration source. Additional resources Configuring the database , which also includes instructions for installing the Oracle and Microsoft SQL Server JDBC drivers Setting sensitive options using a Java KeyStore file , which provides instructions for how to securely store database credentials. 2.4.2. Migrating HTTP and TLS configuration HTTP is disabled and TLS configuration is required by default, whenever the production mode (represented by the start option) is used. You can enable HTTP with the --http-enabled=true configuration option, but it is not recommended unless the Red Hat build of Keycloak server is within a fully isolated network, and no risk exists of internal or external attackers being able to observe networking traffic. A Red Hat build of Keycloak instance has a different context root (URL path) as it uses the root of the server while Red Hat Single Sign-On 7.6 by default appends /auth . To mimic the old behavior, the --http-relative-path=/auth configuration option can be used. The default ports remain the same, but they can also be changed by the --http-port and --https-port options. Two ways exist to configure TLS, either through files in the PEM format or with a Java Keystore. For example, a configuration by Java Keystore appears as follows: <tls> <key-stores> <key-store name="applicationKS"> <credential-reference clear-text="password"/> <implementation type="JKS"/> <file path="/path/to/application.keystore"/> </key-store> </key-stores> <key-managers> <key-manager name="applicationKM" key-store="applicationKS"> <credential-reference clear-text="password"/> </key-manager> </key-managers> <server-ssl-contexts> <server-ssl-context name="applicationSSC" key-manager="applicationKM"/> </server-ssl-contexts> </tls> In Red Hat build of Keycloak , the equivalent configuration using CLI parameters would be as follows: kc.sh start --https-key-store-file /path/to/application.keystore --https-key-store-password password In Red Hat build of Keycloak , you can configure TLS by providing certificates in PEM format as follows: kc.sh start --https-certificate-file /path/to/certfile.pem --https-certificate-key-file /path/to/keyfile.pem Additional resources Configuring TLS 2.4.3. Migrating clustering and cache configuration Red Hat Single Sign-On 7.6 provided distinct operating modes for running the server as standalone, standalone clustered, and domain clustered. These modes differed in the start script and configuration files. Red Hat build of Keycloak offers a simplified solution with a single start script: kc.sh . To run the server as standalone or standalone clustered, use the kc.sh script: Red Hat build of Keycloak Red Hat Single Sign-On 7.6 ./kc.sh start --cache=local ./standalone.sh ./kc.sh start [--cache=ispn] ./standalone.sh --server-config=standalone-ha.xml The default values for the --cache parameter is start mode aware: local - when the start-dev command is executed ispn - when the start command is executed In Red Hat Single Sign-On 7.6, clustering and cache configuration was done through the Infinispan subsystem, while in Red Hat build of Keycloak the majority of the configuration is done through a separate Infinispan configuration file. For example, a configuration of Infinispan appears as follows: <subsystem xmlns="urn:jboss:domain:infinispan:13.0"> <cache-container name="keycloak" marshaller="JBOSS" modules="org.keycloak.keycloak-model-infinispan"> <local-cache name="realms"> <heap-memory size="10000"/> </local-cache> <local-cache name="users"> <heap-memory size="10000"/> </local-cache> <local-cache name="sessions"/> <local-cache name="authenticationSessions"/> <local-cache name="offlineSessions"/> ... </cache-container> </subsystem> In Red Hat build of Keycloak , the default Infinispan configuration file is located in the conf/cache-ispn.xml file. You can provide your own Infinispan configuration file and specify it using the CLI parameter as follows: kc.sh start --cache-config-file my-cache-file.xml Note Domain clustered mode is not supported with Red Hat build of Keycloak . Transport stacks No migration is needed for default and custom JGroups transport stacks in the Red Hat build of Keycloak. The only improvement is the possibility to override the stack defined in the cache configuration file by providing a CLI option cache-stack, which takes precedence. Consider a part of the Infinispan configuration file my-cache-file.xml , specified above, with the custom JGroups transport stack as follows: You can notice the transport stack for the keycloak cache container is set to tcp, but it can be overridden using the CLI option as follows: <jgroups> <stack name="my-encrypt-udp" extends="udp"> ... </stack> </jgroups> <cache-container name="keycloak"> <transport stack="tcp"/> ... </cache-container> kc.sh start --cache-config-file my-cache-file.xml --cache-stack my-encrypt-udp After executing the above commands, the my-encrypt-udp transport stack is used. Additional resources Configuring distributed caches 2.4.4. Migrating hostname and proxy configuration In Red Hat build of Keycloak, you are now obligated to configure the Hostname SPI in order to set how front and back end URLs are going to be created by the server when redirecting users or communicating with their clients. For example, consider if you have a configuration similar as the follows in your Red Hat Single Sign-On 7.6 installation: <spi name="hostname"> <default-provider>default</default-provider> <provider name="default" enabled="true"> <properties> <property name="frontendUrl" value="myFrontendUrl"/> <property name="forceBackendUrlToFrontendUrl" value="true"/> </properties> </provider> </spi> You can translate it to the following configuration options in Red Hat build of Keycloak: kc.sh start --hostname-url myFrontendUrl --hostname-strict-backchannel true The hostname-url configuration option allows you to set the base URL where the cluster is exposed to the public from an ingress layer running in front of your cluster. You can also set the URL for administration resources by setting the hostname-admin-url configuration option. To make it easier to enable HTTP header forwarding when running a cluster behind a reverse proxy, Red Hat build of Keycloak allows you to set different proxy modes, depending on the TLS termination mode on your proxy: none (default) edge passthrough reencrypt Setting edge or reencrypt will enable Red Hat build of Keycloak to recognize HTTP Forwarded and X-Forwarded-* headers set by your proxy. You need to make sure your proxy is overriding those headers before exposing your cluster to the public. Note The hostname and proxy configuration are used for determining the resource URLs (redirect URIs, CSS and JavaScript links, OIDC well-known endpoints, and so on) and not for actively blocking incoming requests. None of the hostname/proxy options change the actual binding addresses or ports that the Red Hat build of Keycloak server listens on - this is a responsibility of the HTTP/TLS options. In Red Hat Single Sign-On 7.6, setting a hostname was highly recommended, but not enforced. In Red Hat build of Keycloak , when the start command is used this is now required, unless explicitly disabled with the --hostname-strict=false option. Additional resources Using a reverse proxy Configuring the hostname 2.4.5. Migrating truststore configuration The truststore is used for external TLS communication, for example HTTPS requests and LDAP servers. To use a truststore, you import the remote server's or CA's certificate into the trustore. Then, you can start the Red Hat build of Keycloak server specifying the truststore SPI. For example, a configuration appears as follows: <spi name="truststore"> <provider name="file" enabled="true"> <properties> <property name="file" value="path/to/myTrustStore.jks"/> <property name="password" value="password"/> <property name="hostname-verification-policy" value="WILDCARD"/> </properties> </provider> </spi> In Red Hat build of Keycloak , the equivalent configuration using CLI parameters would be: kc.sh start --spi-truststore-file-file /path/to/myTrustStore.jks --spi-truststore-file-password password --spi-truststore-file-hostname-verification-policy WILDCARD Additional resources Configuring trusted certificates for outgoing requests 2.4.6. Migrating vault configuration The Keystore Vault is an implementation of the Vault SPI and it is useful for storing secrets in bare metal installations. This vault is a replacement of the Elytron Credential Store in Red Hat Single Sign-On 7.6.. <spi name="vault"> <provider name="elytron-cs-keystore" enabled="true"> <properties> <property name="location" value="path/to/keystore.p12"/> <property name="secret" value="password"/> </properties> </provider> </spi> In Red Hat build of Keycloak, the equivalent configuration using CLI parameters would be: kc.sh start --vault keystore --vault-file /path/to/keystore.p12 --vault-pass password Secrets stored in the vault can be then accessed at multiple places within the Admin Console. When it comes to the migration from the existing Elytron vault to the new Java KeyStore-based vault, no realm configuration changes are required. If a newly created Java keystore contains the same secrets, your existing realm configuration should work. Given that you use the default REALM_UNDERSCORE_KEY key resolver, the secret can be accessed by USD{vault.realm-name_alias} (for example, in your LDAP User federation configuration) the same way as before. Additional resources Using a vault . 2.4.7. Migrating JVM settings The approach for JVM settings in Red Hat build of Keycloak is similar to the Red Hat Single Sign-On 7.6 approach. You still need to set particular environment variables, however, the /bin folder contains no configuration files, such as standalone.conf . Red Hat build of Keycloak provides various default JVM arguments, which proved to be suitable for the majority of deployments as it provides good throughput and efficiency in memory allocation and CPU overhead. Also, other default JVM arguments ensure a smooth run of the Red Hat build of Keycloak instance, so use caution when you change the arguments for your use case. To change JVM arguments or GC settings, you set particular environment variables, which are specified as Java options. For a complete override of these settings, you specify the JAVA_OPTS environment variable. When only an append of a particular Java property is required, you specify the JAVA_OPTS_APPEND environment variable. When no JAVA_OPTS environment variable is specified, the default Java properties are used and can be found inside the ./kc.sh script. For instance, you can specify a particular Java option as follows: export JAVA_OPTS_APPEND=-XX:+HeapDumpOnOutOfMemoryError kc.sh start 2.4.8. Migrating SPI provider configuration Configuration for SPI providers is available through the new configuration system. This is the old format: <spi name="<spi-id>"> <provider name="<provider-id>" enabled="true"> <properties> <property name="<property>" value="<value>"/> </properties> </provider> </spi> This is the new format: spi-<spi-id>-<provider-id>-<property>=<value> Source Format CLI ./kc.sh start --spi-connections-http-client-default-connection-pool-size 10 Environment Variable KC_SPI_CONNECTIONS_HTTP_CLIENT_DEFAULT_CONNECTION_POOL_SIZE=10 Configuration file spi-connections-http-client-default-connection-pool-size=10 Java Keystore file kc.spi-connections-http-client-default-connection-pool-size=10 Additional resources All Provider Config . 2.4.9. Troubleshooting the configuration Use these commands for troubleshooting: kc.sh show-config - shows you the configuration sources from which particular properties are loaded and what their values are. You can check whether a property and its value is propagated correctly. kc.sh --verbose start - prints out the whole error stack trace, when there is an error. 2.5. Migrating the database Red Hat build of Keycloak can automatically migrate the database schema, or you can choose to do it manually. By default the database is automatically migrated when you start the new installation for the first time. 2.5.1. Automatic relational database migration To perform an automatic migration, start the server connected to the desired database. If the database schema has changed for the new version of the server, it will be migrated. 2.5.2. Manual relational database migration To enable manual upgrading of the database schema, set the migration-strategy property value to manual for the default connections-jpa provider: kc.sh start --spi-connections-jpa-legacy-migration-strategy manual When you start the server with this configuration, it checks if the database needs to be migrated. The required changes are written to the bin/keycloak-database-update.sql SQL file that you can review and manually run against the database. To change the path and name of the exported SQL file, set the migration-export property for the default connections-jpa provider: kc.sh start --spi-connections-jpa-legacy-migration-export <path>/<file.sql> For further details on how to apply this file to the database, see the documentation for your relational database. After the changes have been written to the file, the server exits. 2.6. Starting the Red Hat build of Keycloak Server The difference in starting the distribution of Red Hat Single Sign-On 7.6 and Red Hat build of Keycloak is in the executed script. These scripts live in the /bin folder of the server distribution. 2.6.1. Starting the server in development mode To try out Red Hat build of Keycloak without worrying about supplying any properties, you can start the distribution in the development mode as described in the table below. However, note that this mode is strictly for development and should not be used in production. Red Hat build of Keycloak Red Hat Single Sign-On 7.6 ./kc.sh start-dev ./standalone.sh Warning The development mode should NOT be used in production. 2.6.2. Starting the server in production mode Red Hat build of Keycloak has a dedicated start mode for production: ./kc.sh start . The difference from running with start-dev is different default configuration values. It automatically uses a strict and by-default secured configuration setup. In the production mode, HTTP is disabled, and explicit TLS and hostname configuration is required. Additional resources Configuring Keycloak for production Optimize the Keycloak startup
[ "<datasource jndi-name=\"java:jboss/datasources/KeycloakDS\" pool-name=\"KeycloakDS\" enabled=\"true\" use-java-context=\"true\" statistics-enabled=\"true\"> <connection-url>jdbc:postgresql://mypostgres:5432/mydb?currentSchema=myschema</connection-url> <driver>postgresql</driver> <pool> <min-pool-size>5</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>myuser</user-name> <password>myuser</password> </security> </datasource>", "kc.sh start --db postgres --db-url-host mypostgres --db-url-port 5432 --db-url-database mydb --db-schema myschema --db-pool-min-size 5 --db-pool-max-size 50 --db-username myser --db-password myuser", "<tls> <key-stores> <key-store name=\"applicationKS\"> <credential-reference clear-text=\"password\"/> <implementation type=\"JKS\"/> <file path=\"/path/to/application.keystore\"/> </key-store> </key-stores> <key-managers> <key-manager name=\"applicationKM\" key-store=\"applicationKS\"> <credential-reference clear-text=\"password\"/> </key-manager> </key-managers> <server-ssl-contexts> <server-ssl-context name=\"applicationSSC\" key-manager=\"applicationKM\"/> </server-ssl-contexts> </tls>", "kc.sh start --https-key-store-file /path/to/application.keystore --https-key-store-password password", "kc.sh start --https-certificate-file /path/to/certfile.pem --https-certificate-key-file /path/to/keyfile.pem", "<subsystem xmlns=\"urn:jboss:domain:infinispan:13.0\"> <cache-container name=\"keycloak\" marshaller=\"JBOSS\" modules=\"org.keycloak.keycloak-model-infinispan\"> <local-cache name=\"realms\"> <heap-memory size=\"10000\"/> </local-cache> <local-cache name=\"users\"> <heap-memory size=\"10000\"/> </local-cache> <local-cache name=\"sessions\"/> <local-cache name=\"authenticationSessions\"/> <local-cache name=\"offlineSessions\"/> </cache-container> </subsystem>", "kc.sh start --cache-config-file my-cache-file.xml", "<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> ... </stack> </jgroups> <cache-container name=\"keycloak\"> <transport stack=\"tcp\"/> ... </cache-container>", "kc.sh start --cache-config-file my-cache-file.xml --cache-stack my-encrypt-udp", "<spi name=\"hostname\"> <default-provider>default</default-provider> <provider name=\"default\" enabled=\"true\"> <properties> <property name=\"frontendUrl\" value=\"myFrontendUrl\"/> <property name=\"forceBackendUrlToFrontendUrl\" value=\"true\"/> </properties> </provider> </spi>", "kc.sh start --hostname-url myFrontendUrl --hostname-strict-backchannel true", "<spi name=\"truststore\"> <provider name=\"file\" enabled=\"true\"> <properties> <property name=\"file\" value=\"path/to/myTrustStore.jks\"/> <property name=\"password\" value=\"password\"/> <property name=\"hostname-verification-policy\" value=\"WILDCARD\"/> </properties> </provider> </spi>", "kc.sh start --spi-truststore-file-file /path/to/myTrustStore.jks --spi-truststore-file-password password --spi-truststore-file-hostname-verification-policy WILDCARD", "<spi name=\"vault\"> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"path/to/keystore.p12\"/> <property name=\"secret\" value=\"password\"/> </properties> </provider> </spi>", "kc.sh start --vault keystore --vault-file /path/to/keystore.p12 --vault-pass password", "export JAVA_OPTS_APPEND=-XX:+HeapDumpOnOutOfMemoryError kc.sh start", "<spi name=\"<spi-id>\"> <provider name=\"<provider-id>\" enabled=\"true\"> <properties> <property name=\"<property>\" value=\"<value>\"/> </properties> </provider> </spi>", "spi-<spi-id>-<provider-id>-<property>=<value>", "kc.sh start --spi-connections-jpa-legacy-migration-strategy manual", "kc.sh start --spi-connections-jpa-legacy-migration-export <path>/<file.sql>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/migrating-server
5.7. Activating Logical Volumes on Individual Nodes in a Cluster
5.7. Activating Logical Volumes on Individual Nodes in a Cluster If you have LVM installed in a cluster environment, you may at times need to activate logical volumes exclusively on one node. To activate logical volumes exclusively on one node, use the lvchange -aey command. Alternatively, you can use lvchange -aly command to activate logical volumes only on the local node but not exclusively. You can later activate them on additional nodes concurrently. You can also activate logical volumes on individual nodes by using LVM tags, which are described in Appendix D, LVM Object Tags . You can also specify activation of nodes in the configuration file, which is described in Appendix B, The LVM Configuration Files .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/cluster_activation
Chapter 5. Kernel Module Management Operator release notes
Chapter 5. Kernel Module Management Operator release notes 5.1. Kernel Module Management Operator release notes 5.1.1. Release notes for Kernel Module Management Operator 2.2 5.1.1.1. New features KMM is now using the CRI-O container engine to pull container images in the worker pod instead of using HTTP calls directly from the worker container. For more information, see Example Module CR . The Kernel Module Management (KMM) Operator images are now based on rhel-els-minimal container images instead of the rhel-els images. This change results in a greatly reduced image footprint, while still maintaining FIPS compliance. In this release, the firmware search path has been updated to copy the contents of the specified path into the path specified in worker.setFirmwareClassPath (default: /var/lib/firmware). For more information, see Example Module CR . For each node running a kernel matching the regular expression, KMM now checks if you have included a tag or a digest. If you have not specified a tag or digest in the container image, then the validation webhook returns an error and does not apply the module. For more information, see Example Module CR .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/specialized_hardware_and_driver_enablement/kernel-module-management-operator-release-notes
Chapter 4. Installing managed clusters with RHACM and SiteConfig resources
Chapter 4. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching 4.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM. Inform policies By default, GitOps ZTP creates all policies with a remediation action of inform . These policies cause RHACM to report on compliance status of clusters relevant to the policies but does not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed cluster(s). This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM. Automatic creation of ClusterGroupUpgrade CRs To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics: The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace. ClusterGroupUpgrade CR has the same name as the ManagedCluster CR. The cluster selector includes only the cluster associated with that ManagedCluster CR. The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created. Pre-caching is disabled. Timeout set to 4 hours (240 minutes). The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster. Waves Each policy generated from a PolicyGenerator or PolicyGentemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenerator or PolicyGentemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process. When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 4.2. Overview of deploying managed clusters with GitOps ZTP Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters. The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 4.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.18, you can only add kernel arguments. You can not replace or delete kernel arguments. Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment. Important SiteConfig v1 is deprecated starting with OpenShift Container Platform version 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the ClusterInstance custom resource. For more information, see Procedure to transition from SiteConfig CRs to the ClusterInstance API . For more information about the SiteConfig Operator, see SiteConfig . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: Example single-node OpenShift SiteConfig CR # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.18" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager", "Ingress" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot. bootMode: "UEFISecureBoot" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 Note For more information about BMC addressing, see the "Additional resources" section. The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Enabling the crun OCI container runtime For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters. Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot. The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline". Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml . Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Verification Verify that the custom roles and labels are applied after the node is deployed: USD oc describe node example-node.example.com Example output Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos 1 The custom label is applied to the node. Additional resources Single-node OpenShift SiteConfig CR installation reference 4.5.1. Accelerated provisioning of GitOps ZTP Important Accelerated provisioning of GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can reduce the time taken for cluster installation by using accelerated provisioning of GitOps ZTP for single-node OpenShift. Accelerated ZTP speeds up installation by applying Day 2 manifests derived from policies at an earlier stage. Important Accelerated provisioning of GitOps ZTP is supported only when installing single-node OpenShift with Assisted Installer. Otherwise this installation method will fail. 4.5.1.1. Activating accelerated ZTP You can activate accelerated ZTP using the spec.clusters.clusterLabels.accelerated-ztp label, as in the following example: Example Accelerated ZTP SiteConfig CR. apiVersion: ran.openshift.io/v2 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.18" sshPublicKey: "ssh-rsa AAAA..." clusters: # ... clusterLabels: common: true group-du-sno: "" sites : "example-sno" accelerated-ztp: full You can use accelerated-ztp: full to fully automate the accelerated process. GitOps ZTP updates the AgentClusterInstall resource with a reference to the accelerated GitOps ZTP ConfigMap , and includes resources extracted from policies by TALM, and accelerated ZTP job manifests. If you use accelerated-ztp: partial , GitOps ZTP does not include the accelerated job manifests, but includes policy-derived objects created during the cluster installation of the following kind types: PerformanceProfile.performance.openshift.io Tuned.tuned.openshift.io Namespace CatalogSource.operators.coreos.com ContainerRuntimeConfig.machineconfiguration.openshift.io This partial acceleration can reduce the number of reboots done by the node when applying resources of the kind Performance Profile , Tuned , and ContainerRuntimeConfig . TALM installs the Operator subscriptions derived from policies after RHACM completes the import of the cluster, following the same flow as standard GitOps ZTP. The benefits of accelerated ZTP increase with the scale of your deployment. Using accelerated-ztp: full gives more benefit on a large number of clusters. With a smaller number of clusters, the reduction in installation time is less significant. Full accelerated ZTP leaves behind a namespace and a completed job on the spoke that need to be manually removed. One benefit of using accelerated-ztp: partial is that you can override the functionality of the on-spoke job if something goes wrong with the stock implementation or if you require a custom functionality. 4.5.1.2. The accelerated ZTP process Accelerated ZTP uses an additional ConfigMap to create the resources derived from policies on the spoke cluster. The standard ConfigMap includes manifests that the GitOps ZTP workflow uses to customize cluster installs. TALM detects that the accelerated-ztp label is set and then creates a second ConfigMap . As part of accelerated ZTP, the SiteConfig generator adds a reference to that second ConfigMap using the naming convention <spoke-cluster-name>-aztp . After TALM creates that second ConfigMap , it finds all policies bound to the managed cluster and extracts the GitOps ZTP profile information. TALM adds the GitOps ZTP profile information to the <spoke-cluster-name>-aztp ConfigMap custom resource (CR) and applies the CR to the hub cluster API. 4.5.2. Configuring IPsec encryption for single-node OpenShift clusters using GitOps ZTP and SiteConfig resources You can enable IPsec encryption in managed single-node OpenShift clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode. Important You can also configure IPsec encryption for single-node OpenShift clusters with an additional worker node by following this procedure. It is recommended to use the MachineConfig custom resource (CR) to configure IPsec encryption for single-node OpenShift clusters and single-node OpenShift clusters with an additional worker node because of their low resource availability. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. You have installed the butane utility version 0.20.0 or later. You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. Procedure Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data. Configure optional-extra-manifest/ipsec/ipsec-endpoint-config.yaml with the required values that configure IPsec in the cluster. For example: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel 1 The value of this field must match with the name of the certificate used on the remote system. 2 Replace <external_host> with the external host IP address or DNS hostname. 3 Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel. 4 Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated. Add the following certificates to the optional-extra-manifest/ipsec folder: left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps. Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data. Run the optional-extra-manifest/ipsec/build.sh script to generate the required Butane and MachineConfig CRs files. If the PKCS#12 certificate is protected with a password, set the -W argument. Example output out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-endpoint-config.bu 1 ├── 99-ipsec-master-endpoint-config.yaml 2 ├── 99-ipsec-worker-endpoint-config.bu 3 ├── 99-ipsec-worker-endpoint-config.yaml 4 ├── build.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-endpoint-config.yml └── README.md 1 2 3 4 The ipsec/build.sh script generates the Butane and endpoint configuration CRs. 5 6 You provide ca.pem and left_server.p12 certificate files that are relevant to your network. Create a custom-manifest/ folder in the repository where you manage your custom site configuration data. Add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. For example: siteconfig ├── site1-sno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-worker-endpoint-config.yaml └── 99-ipsec-master-endpoint-config.yaml In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field. For example: clusters: - clusterName: "site1-sno-du" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/ Commit the SiteConfig CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption. The Argo CD pipeline detects the changes and begins the managed cluster deployment. During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the custom-manifest/ directory to the default set of extra manifests stored in the extra-manifest/ directory. Verification For information about verifying the IPsec encryption, see "Verifying the IPsec encryption". Additional resources Verifying the IPsec encryption Configuring IPsec encryption Encryption protocol and IPsec mode Installing managed clusters with RHACM and SiteConfig resources 4.5.3. Configuring IPsec encryption for multi-node clusters using GitOps ZTP and SiteConfig resources You can enable IPsec encryption in managed multi-node clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. You have installed the butane utility version 0.20.0 or later. You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. You have installed the NMState Operator. Procedure Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data. Configure the optional-extra-manifest/ipsec/ipsec-config-policy.yaml file with the required values that configure IPsec in the cluster. ConfigurationPolicy object for creating an IPsec configuration apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup "v1" "Node" "" "").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel 1 The value of this field must match with the name of the certificate used on the remote system. 2 Replace <external_host> with the external host IP address or DNS hostname. 3 Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel. 4 Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated. Add the following certificates to the optional-extra-manifest/ipsec folder: left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps. Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data. Run the optional-extra-manifest/ipsec/import-certs.sh script to generate the required Butane and MachineConfig CRs to import the external certs. If the PKCS#12 certificate is protected with a password, set the -W argument. Example output out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-import-certs.bu 1 ├── 99-ipsec-master-import-certs.yaml 2 ├── 99-ipsec-worker-import-certs.bu 3 ├── 99-ipsec-worker-import-certs.yaml 4 ├── import-certs.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-config-policy.yaml └── README.md 1 2 3 4 The ipsec/import-certs.sh script generates the Butane and endpoint configuration CRs. 5 6 Add the ca.pem and left_server.p12 certificate files that are relevant to your network. Create a custom-manifest/ folder in the repository where you manage your custom site configuration data and add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. Example siteconfig directory siteconfig ├── site1-mno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-master-import-certs.yaml └── 99-ipsec-worker-import-certs.yaml In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field, as in the following example: clusters: - clusterName: "site1-mno-du" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/ Include the ipsec-config-policy.yaml config policy file in the source-crs directory in GitOps and reference the file in one of the PolicyGenerator CRs. Commit the SiteConfig CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption. The Argo CD pipeline detects the changes and begins the managed cluster deployment. During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the custom-manifest/ directory to the default set of extra manifests stored in the extra-manifest/ directory. Verification For information about verifying the IPsec encryption, see "Verifying the IPsec encryption". Additional resources Verifying the IPsec encryption Configuring IPsec encryption Encryption protocol and IPsec mode Installing managed clusters with RHACM and SiteConfig resources 4.5.4. Verifying the IPsec encryption You can verify that the IPsec encryption is successfully applied in a managed OpenShift Container Platform cluster. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured the IPsec encryption. Procedure Start a debug pod for the managed cluster by running the following command: USD oc debug node/<node_name> Check that the IPsec policy is applied in the cluster node by running the following command: sh-5.1# ip xfrm policy Example output src 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel Check that the IPsec tunnel is up and connected by running the following command: sh-5.1# ip xfrm state Example output src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 Ping a known IP in the external host subnet by running the following command: For example, ping an IP address in the rightsubnet range that you set in the ipsec/ipsec-endpoint-config.yaml file: sh-5.1# ping 172.16.110.8 Example output PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data. 64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms 64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms 4.5.5. Single-node OpenShift SiteConfig CR installation reference Table 4.1. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description spec.cpuPartitioningMode Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterImageSetNameRef Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at site level. spec.clusters.clusterLabels Configure cluster labels to correspond to the binding rules in the PolicyGenerator or PolicyGentemplate CRs that you define. PolicyGenerator CRs use the policyDefaults.placement.labelSelector field. PolicyGentemplate CRs use the spec.bindingRules field. For example, acmpolicygenerator/acm-common-ranGen.yaml applies to all clusters with common: true set, acmpolicygenerator/acm-group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default `KlusterletAddonConfig that is created for the cluster. spec.clusters.diskEncryption Configure this field to enable disk encryption with Trusted Platform Module (TPM) and Platform Configuration Registers (PCRs) protection. For more information, see "About disk encryption with TPM and PCR protection". Note Configuring disk encryption by using the diskEncryption field in the SiteConfig CR is a Technology Preview feature in OpenShift Container Platform 4.18. spec.clusters.diskEncryption.type Set the disk encryption type to tpm2 . spec.clusters.diskEncryption.tpm2 Configure the Platform Configuration Registers (PCRs) protection for disk encryption. spec.clusters.diskEncryption.tpm2.pcrList Configure the list of Platform Configuration Registers (PCRs) to be used for disk encryption. You must use PCR registers 1 and 7. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.nodeLabels Specify custom roles for your nodes in your managed clusters. These are additional roles are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. spec.clusters.nodes.automatedCleaningMode Optional. Uncomment and set the value to metadata to enable the removal of the disk's partitioning table only, without fully wiping the disk. The default value is disabled . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . <by-path> values are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.ignitionConfigOverride Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. spec.clusters.nodes.nodeNetwork Configure the network settings for the node. spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Additional resources About disk encryption with TPM and PCR protection . Customizing extra installation manifests in the GitOps ZTP pipeline Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling GitOps ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing About root device hints 4.6. Managing host firmware settings with GitOps ZTP Hosts require the correct firmware configuration to ensure high performance and optimal efficiency. You can deploy custom host firmware configurations for managed clusters with GitOps ZTP. Tune hosts with specific hardware profiles in your lab and ensure they are optimized for your requirements. When you have completed host tuning to your satisfaction, you extract the host profile and save it in your GitOps ZTP repository. Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with GitOps ZTP. You specify the required hardware profiles in SiteConfig custom resources (CRs) that you use to deploy the managed clusters. The GitOps ZTP pipeline generates the required HostFirmwareSettings ( HFS ) and BareMetalHost ( BMH ) CRs that are applied to the hub cluster. Use the following best practices to manage your host firmware profiles. Identify critical firmware settings with hardware vendors Work with hardware vendors to identify and document critical host firmware settings required for optimal performance and compatibility with the deployed host platform. Use common firmware configurations across similar hardware platforms Where possible, use a standardized host firmware configuration across similar hardware platforms to reduce complexity and potential errors during deployment. Test firmware configurations in a lab environment Test host firmware configurations in a controlled lab environment before deploying in production to ensure that settings are compatible with hardware, firmware, and software. Manage firmware profiles in source control Manage host firmware profiles in Git repositories to track changes, ensure consistency, and facilitate collaboration with vendors. Additional resources Recommended firmware configuration for vDU cluster hosts 4.6.1. Retrieving the host firmware schema for a managed cluster You can discover the host firmware schema for managed clusters. The host firmware schema for bare-metal hosts is populated with information that the Ironic API returns. The API returns information about host firmware interfaces, including firmware setting types, allowable values, ranges, and flags. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. Procedure Discover the host firmware schema for the managed cluster. Run the following command: USD oc get firmwareschema -n <managed_cluster_namespace> -o yaml Example output apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: FirmwareSchema metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: schema-40562318 namespace: compute-1 ownerReferences: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings name: compute-1.example.com uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 resourceVersion: "280057624" uid: 511ad25d-f1c9-457b-9a96-776605c7b887 spec: schema: AccessControlService: allowable_values: - Enabled - Disabled attribute_type: Enumeration read_only: false # ... 4.6.2. Retrieving the host firmware settings for a managed cluster You can retrieve the host firmware settings for managed clusters. This is useful when you have deployed changes to the host firmware and you want to monitor the changes and ensure that they are applied successfully. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. Procedure Retrieve the host firmware settings for the managed cluster. Run the following command: USD oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yaml Example output apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: compute-1.example.com namespace: kni-qe-24 ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: compute-1.example.com uid: 0baddbb7-bb34-4224-8427-3d01d91c9287 resourceVersion: "280057626" uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 spec: settings: {} status: conditions: - lastTransitionTime: "2024-09-11T10:29:43Z" message: "" observedGeneration: 1 reason: Success status: "True" 1 type: ChangeDetected - lastTransitionTime: "2024-09-11T10:29:43Z" message: Invalid BIOS setting observedGeneration: 1 reason: ConfigurationError status: "False" 2 type: Valid lastUpdated: "2024-09-11T10:29:43Z" schema: name: schema-40562318 namespace: compute-1 settings: 3 AccessControlService: Enabled AcpiHpet: Enabled AcpiRootBridgePxm: Enabled # ... 1 Indicates that a change in the host firmware settings has been detected 2 Indicates that the host has an invalid firmware setting 3 The complete list of configured host firmware settings is returned under the status.settings field Optional: Check the status of the HostFirmwareSettings ( hfs ) custom resource in the cluster: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="ChangeDetected")].status}' Example output True Optional: Check for invalid firmware settings in the cluster host. Run the following command: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}' Example output False 4.6.3. Deploying user-defined firmware to cluster hosts with GitOps ZTP You can deploy user-defined firmware settings to cluster hosts by configuring the SiteConfig custom resource (CR) to include a hardware profile that you want to apply during cluster host provisioning. You can configure hardware profiles to apply to hosts in the following scenarios: All hosts site-wide Only cluster hosts that meet certain criteria Individual cluster hosts Important You can configure host hardware profiles to be applied in a hierarchy. Cluster-level settings override site-wide settings. Node level profiles override cluster and site-wide settings. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create the host firmware profile that contain the firmware settings you want to apply. For example, create the following YAML file: host-firmware.profile BootMode: Uefi LogicalProc: Enabled ProcVirtualization: Enabled Save the hardware profile YAML file relative to the kustomization.yaml file that you use to define how to provision the cluster, for example: example-ztp/install └── site-install ├── siteconfig-example.yaml ├── kustomization.yaml └── host-firmware.profile Edit the SiteConfig CR to include the firmware profile that you want to apply in the cluster. For example: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site-plan-cluster" namespace: "example-cluster-namespace" spec: baseDomain: "example.com" # ... biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies the hardware profile to all cluster hosts site-wide Note Where possible, use a single SiteConfig CR per cluster. Optional. To apply a hardware profile to hosts in a specific cluster, update clusters.biosConfigRef.filePath with the hardware profile that you want to apply. For example: clusters: - clusterName: "cluster-1" # ... biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies to all hosts in the cluster-1 cluster Optional. To apply a hardware profile to a specific host in the cluster, update clusters.nodes.biosConfigRef.filePath with the hardware profile that you want to apply. For example: clusters: - clusterName: "cluster-1" # ... nodes: - hostName: "compute-1.example.com" # ... bootMode: "UEFI" biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies the firmware profile to the compute-1.example.com host in the cluster Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Note Cluster deployment proceeds even if an invalid firmware setting is detected. To apply a correction using GitOps ZTP, re-deploy the cluster with the corrected hardware profile. Verification Check that the firmware settings have been applied in the managed cluster host. For example, run the following command: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}' Example output True 4.7. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs it with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 4.8. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenerator or PolicyGentemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs. Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc get applications.argoproj.io -n openshift-gitops clusters -o yaml To identify error logs for the managed cluster, inspect the status.operationState.syncResult.resources field. For example, if an invalid value is assigned to the extraManifestPath in the SiteConfig CR, an error similar to the following is generated: syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the "SiteConfig" CRD is installed on the destination cluster To see a more detailed SiteConfig error, complete the following steps: In the Argo CD dashboard, click the SiteConfig resource that Argo CD is trying to sync. Check the DESIRED MANIFEST tab to find the siteConfigError field. siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 4.9. Troubleshooting GitOps ZTP virtual media booting on SuperMicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 4.10. Removing a managed cluster site from the GitOps ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenerator or PolicyGentemplate files from the kustomization.yaml file. Add the following syncOptions field to your SiteConfig application. kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background When you run the GitOps ZTP pipeline again, the generated CRs are removed. Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenerator or PolicyGentemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenerator or PolicyGentemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 4.11. Removing obsolete content from the GitOps ZTP pipeline If a change to the PolicyGenerator or PolicyGentemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenerator or PolicyGentemplate files from the Git repository, commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenerator or PolicyGentemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and CRs managed by that policy remains in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenerator or PolicyGentemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 4.12. Tearing down the GitOps ZTP pipeline You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository.
[ "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.18\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot. bootMode: \"UEFISecureBoot\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "apiVersion: ran.openshift.io/v2 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.18\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: # clusterLabels: common: true group-du-sno: \"\" sites : \"example-sno\" accelerated-ztp: full", "interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel", "out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-endpoint-config.bu 1 ├── 99-ipsec-master-endpoint-config.yaml 2 ├── 99-ipsec-worker-endpoint-config.bu 3 ├── 99-ipsec-worker-endpoint-config.yaml 4 ├── build.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-endpoint-config.yml └── README.md", "siteconfig ├── site1-sno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-worker-endpoint-config.yaml └── 99-ipsec-master-endpoint-config.yaml", "clusters: - clusterName: \"site1-sno-du\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup \"v1\" \"Node\" \"\" \"\").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel", "out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-import-certs.bu 1 ├── 99-ipsec-master-import-certs.yaml 2 ├── 99-ipsec-worker-import-certs.bu 3 ├── 99-ipsec-worker-import-certs.yaml 4 ├── import-certs.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-config-policy.yaml └── README.md", "siteconfig ├── site1-mno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-master-import-certs.yaml └── 99-ipsec-worker-import-certs.yaml", "clusters: - clusterName: \"site1-mno-du\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/", "oc debug node/<node_name>", "sh-5.1# ip xfrm policy", "src 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel", "sh-5.1# ip xfrm state", "src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000", "sh-5.1# ping 172.16.110.8", "PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data. 64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms 64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms", "oc get firmwareschema -n <managed_cluster_namespace> -o yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: FirmwareSchema metadata: creationTimestamp: \"2024-09-11T10:29:43Z\" generation: 1 name: schema-40562318 namespace: compute-1 ownerReferences: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings name: compute-1.example.com uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 resourceVersion: \"280057624\" uid: 511ad25d-f1c9-457b-9a96-776605c7b887 spec: schema: AccessControlService: allowable_values: - Enabled - Disabled attribute_type: Enumeration read_only: false #", "oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings metadata: creationTimestamp: \"2024-09-11T10:29:43Z\" generation: 1 name: compute-1.example.com namespace: kni-qe-24 ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: compute-1.example.com uid: 0baddbb7-bb34-4224-8427-3d01d91c9287 resourceVersion: \"280057626\" uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 spec: settings: {} status: conditions: - lastTransitionTime: \"2024-09-11T10:29:43Z\" message: \"\" observedGeneration: 1 reason: Success status: \"True\" 1 type: ChangeDetected - lastTransitionTime: \"2024-09-11T10:29:43Z\" message: Invalid BIOS setting observedGeneration: 1 reason: ConfigurationError status: \"False\" 2 type: Valid lastUpdated: \"2024-09-11T10:29:43Z\" schema: name: schema-40562318 namespace: compute-1 settings: 3 AccessControlService: Enabled AcpiHpet: Enabled AcpiRootBridgePxm: Enabled #", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"ChangeDetected\")].status}'", "True", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"Valid\")].status}'", "False", "BootMode: Uefi LogicalProc: Enabled ProcVirtualization: Enabled", "example-ztp/install └── site-install ├── siteconfig-example.yaml ├── kustomization.yaml └── host-firmware.profile", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site-plan-cluster\" namespace: \"example-cluster-namespace\" spec: baseDomain: \"example.com\" # biosConfigRef: filePath: \"./host-firmware.profile\" 1", "clusters: - clusterName: \"cluster-1\" # biosConfigRef: filePath: \"./host-firmware.profile\" 1", "clusters: - clusterName: \"cluster-1\" # nodes: - hostName: \"compute-1.example.com\" # bootMode: \"UEFI\" biosConfigRef: filePath: \"./host-firmware.profile\" 1", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"Valid\")].status}'", "True", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc get applications.argoproj.io -n openshift-gitops clusters -o yaml", "syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the \"SiteConfig\" CRD is installed on the destination cluster", "siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/edge_computing/ztp-deploying-far-edge-sites
Installation configuration
Installation configuration OpenShift Container Platform 4.12 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installation_configuration/index
Installing IBM Cloud Bare Metal (Classic)
Installing IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team
[ "<cluster_name>.<domain>", "test-cluster.example.com", "ipmi://<IP>:<port>?privilegelevel=OPERATOR", "ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>", "useradd kni", "passwd kni", "echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni", "chmod 0440 /etc/sudoers.d/kni", "su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"", "su - kni", "sudo subscription-manager register --username=<user> --password=<pass> --auto-attach", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms", "sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool", "sudo usermod --append --groups libvirt kni", "sudo systemctl start firewalld", "sudo systemctl enable firewalld", "sudo firewall-cmd --zone=public --add-service=http --permanent", "sudo firewall-cmd --reload", "sudo systemctl enable libvirtd --now", "PRVN_HOST_ID=<ID>", "ibmcloud sl hardware list", "PUBLICSUBNETID=<ID>", "ibmcloud sl subnet list", "PRIVSUBNETID=<ID>", "ibmcloud sl subnet list", "PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)", "PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)", "PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR", "PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)", "PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)", "PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)", "PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR", "PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)", "sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"", "ssh kni@provisioner.<cluster-name>.<domain>", "sudo nmcli con show", "NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2", "vim pull-secret.txt", "sudo dnf install dnsmasq", "sudo vi /etc/dnsmasq.conf", "interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r", "ibmcloud sl hardware list", "ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null", "\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"", "sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile", "00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1", "sudo systemctl start dnsmasq", "sudo systemctl enable dnsmasq", "sudo systemctl status dnsmasq", "● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k", "sudo firewall-cmd --add-port 53/udp --permanent", "sudo firewall-cmd --add-port 67/udp --permanent", "sudo firewall-cmd --change-zone=provisioning --zone=external --permanent", "sudo firewall-cmd --reload", "export VERSION=stable-4.16", "export RELEASE_ARCH=<architecture>", "export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')", "export cmd=openshift-baremetal-install", "export pullsecret_file=~/pull-secret.txt", "export extract_dir=USD(pwd)", "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin", "oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}", "sudo cp openshift-baremetal-install /usr/local/bin", "apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'", "ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'", "mkdir ~/clusterconfigs", "cp install-config.yaml ~/clusterconfigs", "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "metadata: name:", "networking: machineNetwork: - cidr:", "compute: - name: worker", "compute: replicas: 2", "controlPlane: name: master", "controlPlane: replicas: 3", "- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_ibm_cloud_bare_metal_classic/index
17.4. Configuration Examples
17.4. Configuration Examples 17.4.1. Dynamic DNS BIND allows hosts to update their records in DNS and zone files dynamically. This is used when a host computer's IP address changes frequently and the DNS record requires real-time modification. Use the /var/named/dynamic/ directory for zone files you want updated by dynamic DNS. Files created in or copied into this directory inherit Linux permissions that allow named to write to them. As such files are labeled with the named_cache_t type, SELinux allows named to write to them. If a zone file in /var/named/dynamic/ is labeled with the named_zone_t type, dynamic DNS updates may not be successful for a certain period of time as the update needs to be written to a journal first before being merged. If the zone file is labeled with the named_zone_t type when the journal attempts to be merged, an error such as the following is logged: Also, the following SELinux denial message is logged: To resolve this labeling issue, use the restorecon utility as root:
[ "named[PID]: dumping master file: rename: /var/named/dynamic/zone-name: permission denied", "setroubleshoot: SELinux is preventing named (named_t) \"unlink\" to zone-name (named_zone_t)", "~]# restorecon -R -v /var/named/dynamic" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-bind-configuration_examples
Chapter 6. Renewing the AMQ Interconnect certificate
Chapter 6. Renewing the AMQ Interconnect certificate Periodically, you must renew the CA certificate that secures the AMQ Interconnect connection between Red Hat OpenStack Platform (RHOSP) and Service Telemetry Framework (STF) when the certificate expires. The renewal is handled automatically by the cert-manager component in Red Hat OpenShift Container Platform, but you must manually copy the renewed certificate to your RHOSP nodes. 6.1. Checking for an expired AMQ Interconnect CA certificate When the CA certificate expires, the AMQ Interconnect connections remain up, but cannot reconnect if they are interrupted. Eventually, some or all of the connections from your Red Hat OpenStack Platform (RHOSP) dispatch routers fail, showing errors on both sides, and the expiry or Not After field in your CA certificate is in the past. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Verify that some or all dispatch router connections have failed: USD oc exec -it USD(oc get po -l application=default-interconnect -o jsonpath='{.items[0].metadata.name}') -- qdstat --connections | grep Router | wc 0 0 0 Check for this error in the Red Hat OpenShift Container Platform-hosted AMQ Interconnect logs: USD oc logs -l application=default-interconnect | tail [...] 2022-11-10 20:51:22.863466 +0000 SERVER (info) [C261] Connection from 10.10.10.10:34570 (to 0.0.0.0:5671) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure Log into your RHOSP undercloud. Check for this error in the RHOSP-hosted AMQ Interconnect logs of a node with a failed connection: USD ssh controller-0.ctlplane -- sudo tail /var/log/containers/metrics_qdr/metrics_qdr.log [...] 2022-11-10 20:50:44.311646 +0000 SERVER (info) [C137] Connection to default-interconnect-5671-service-telemetry.apps.mycluster.com:443 failed: amqp:connection:framing-error SSL Failure: error:0A000086:SSL routines::certificate verify failed Confirm that the CA certificate has expired by examining the file on an RHOSP node: USD ssh controller-0.ctlplane -- cat /var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem | openssl x509 -text | grep "Not After" Not After : Nov 10 20:31:16 2022 GMT USD date Mon Nov 14 11:10:40 EST 2022 6.2. Updating the AMQ Interconnect CA certificate To update the AMQ Interconnect certificate, you must export it from Red Hat OpenShift Container Platform and copy it to your Red Hat OpenStack Platform (RHOSP) nodes. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Export the CA certificate to STFCA.pem : USD oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\.crt}' | base64 -d > STFCA.pem Copy STFCA.pem to your RHOSP undercloud. Log into your RHOSP undercloud. Edit the stf-connectors.yaml file to contain the new caCertFileContent. For more information, see Section 4.1.4, "Configuring the STF connection for the overcloud" . Copy the STFCA.pem file to each RHOSP overcloud node: [stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -b -m copy -a "src=STFCA.pem dest=/var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem" Restart the metrics_qdr container on each RHOSP overcloud node: [stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -m shell -a "sudo podman restart metrics_qdr" Note You do not need to deploy the overcloud after you copy the STFCA.pem file and restart the metrics_qdr container. You edit the stf-connectors.yaml file so that future deployments do not overwrite the new CA certificate.
[ "oc project service-telemetry", "oc exec -it USD(oc get po -l application=default-interconnect -o jsonpath='{.items[0].metadata.name}') -- qdstat --connections | grep Router | wc 0 0 0", "oc logs -l application=default-interconnect | tail [...] 2022-11-10 20:51:22.863466 +0000 SERVER (info) [C261] Connection from 10.10.10.10:34570 (to 0.0.0.0:5671) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure", "ssh controller-0.ctlplane -- sudo tail /var/log/containers/metrics_qdr/metrics_qdr.log [...] 2022-11-10 20:50:44.311646 +0000 SERVER (info) [C137] Connection to default-interconnect-5671-service-telemetry.apps.mycluster.com:443 failed: amqp:connection:framing-error SSL Failure: error:0A000086:SSL routines::certificate verify failed", "ssh controller-0.ctlplane -- cat /var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem | openssl x509 -text | grep \"Not After\" Not After : Nov 10 20:31:16 2022 GMT date Mon Nov 14 11:10:40 EST 2022", "oc project service-telemetry", "oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\\.crt}' | base64 -d > STFCA.pem", "[stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -b -m copy -a \"src=STFCA.pem dest=/var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem\"", "[stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -m shell -a \"sudo podman restart metrics_qdr\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/assembly-renewing-the-amq-interconnect-certificate_assembly
22.4. Understanding the Drift File
22.4. Understanding the Drift File The drift file is used to store the frequency offset between the system clock running at its nominal frequency and the frequency required to remain in synchronization with UTC. If present, the value contained in the drift file is read at system start and used to correct the clock source. Use of the drift file reduces the time required to achieve a stable and accurate time. The value is calculated, and the drift file replaced, once per hour by ntpd . The drift file is replaced, rather than just updated, and for this reason the drift file must be in a directory for which ntpd has write permissions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-understanding_the_drift_file
1.2. Digital Signatures
1.2. Digital Signatures Tamper detection relies on a mathematical function called a one-way hash (also called a message digest ). A one-way hash is a number of fixed length with the following characteristics: The value of the hash is unique for the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. The content of the hashed data cannot be deduced from the hash. As mentioned in Section 1.1.2, "Public-Key Encryption" , it is possible to use a private key for encryption and the corresponding public key for decryption. Although not recommended when encrypting sensitive information, it is a crucial part of digitally signing any data. Instead of encrypting the data itself, the signing software creates a one-way hash of the data, then uses the private key to encrypt the hash. The encrypted hash, along with other information such as the hashing algorithm, is known as a digital signature. Figure 1.3, "Using a Digital Signature to Validate Data Integrity" illustrates the way a digital signature can be used to validate the integrity of signed data. Figure 1.3. Using a Digital Signature to Validate Data Integrity Figure 1.3, "Using a Digital Signature to Validate Data Integrity" shows two items transferred to the recipient of some signed data: the original data and the digital signature, which is a one-way hash of the original data encrypted with the signer's private key. To validate the integrity of the data, the receiving software first uses the public key to decrypt the hash. It then uses the same hashing algorithm that generated the original hash to generate a new one-way hash of the same data. (Information about the hashing algorithm used is sent with the digital signature.) Finally, the receiving software compares the new hash against the original hash. If the two hashes match, the data has not changed since it was signed. If they do not match, the data may have been tampered with since it was signed, or the signature may have been created with a private key that does not correspond to the public key presented by the signer. If the two hashes match, the recipient can be certain that the public key used to decrypt the digital signature corresponds to the private key used to create the digital signature. Confirming the identity of the signer also requires some way of confirming that the public key belongs to a particular entity. For more information on authenticating users, see Section 1.3, "Certificates and Authentication" . A digital signature is similar to a handwritten signature. Once data have been signed, it is difficult to deny doing so later, assuming the private key has not been compromised. This quality of digital signatures provides a high degree of nonrepudiation; digital signatures make it difficult for the signer to deny having signed the data. In some situations, a digital signature is as legally binding as a handwritten signature.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Introduction_to_Public_Key_Cryptography-Digital_Signatures
Chapter 3. Package Namespace Change for JBoss EAP 8.0
Chapter 3. Package Namespace Change for JBoss EAP 8.0 This section provides additional information for the package namespace changes in JBoss EAP 8.0. JBoss EAP 8.0 provides full support for Jakarta EE 10 and many other implementations of the Jakarta EE 10 APIs. An important change supported by Jakarta EE 10 for JBoss EAP 8.0 is the package namespace change. 3.1. javax to jakarta Namespace change A key difference between Jakarta EE 8 and EE 10 is the renaming of the EE API Java packages from javax.* to jakarta.* . This follows the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE. Adapting to this namespace change is the biggest task of migrating an application from JBoss EAP 7 to JBoss EAP 8. To migrate applications to Jakarta EE 10, you must complete the following steps: Update any import statements or other source code uses of EE API classes from the javax package to the jakarta package. Update the names of any EE-specified system properties or other configuration properties that begin with javax to begin with jakarta . For any application-provided implementations of EE interfaces or abstract classes that are bootstrapped using the java.util.ServiceLoader mechanism, change the name of the resource that identifies the implementation class from META-INF/services/javax.[rest_of_name] to META-INF/services/jakarta.[rest_of_name] . Note The Red Hat Migration Toolkit can assist in updating the namespaces in the application source code. For more information, see How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace . In cases where source code migration is not an option, the Open Source Eclipse Transformer project provides bytecode transformation tooling to transform existing Java archives from the javax namespace to the jakarta namespace. Note This change does not affect javax packages that are part of Java SE. Additional resources For more information, see The javax to jakarta Package Namespace Change . Revised on 2024-02-21 14:02:49 UTC
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/getting_started_with_red_hat_jboss_enterprise_application_platform/package-namespace-change-for-jboss-eap-8-0_assembly-getting-started
Chapter 11. Preparing for users
Chapter 11. Preparing for users After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users. 11.1. Understanding identity provider configuration The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 11.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 11.1.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . After you define an identity provider, you can use RBAC to define and apply permissions . 11.1.3. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 11.1.4. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 11.2. Using RBAC to define and apply permissions Understand and apply role-based access control. 11.2.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 11.2.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 11.2.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 11.2.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 11.2.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 11.2.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and the have guaranteed admission by kubelet. Pods created for master components in these namespaces are already marked as critical. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 11.2.4. Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 11.2.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 11.2.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admins RoleBinding . 11.2.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 11.2.8. Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 11.2.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used. You can use the following commands for local RBAC management. Table 11.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 11.2.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 11.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 11.2.11. Creating a cluster admin The cluster-admin role is required to perform administrator level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user> 11.3. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 11.3.1. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 11.4. Populating OperatorHub from mirrored Operator catalogs If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects. 11.4.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters 11.4.1.1. Creating the ImageContentSourcePolicy object After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry. Procedure On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. You can now create a CatalogSource object to reference your mirrored index image and Operator content. 11.4.1.2. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites You built and pushed an index image to a registry. You have access to the cluster as a user with the cluster-admin role. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.15 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 4 Specify your index image. If you specify a tag after the image name, for example :v4.15 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 5 Specify your name or an organization name publishing the catalog. 6 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 11.5. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces... to make the Operator available to all users and projects. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. 11.5.1. Installing from OperatorHub by using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page, configure your Operator installation: If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install. Note The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. Confirm the installation mode for the Operator: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. For clusters on cloud providers with token authentication enabled: If the cluster uses AWS STS ( STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . If the cluster uses Microsoft Entra Workload ID ( Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate field. For Update approval , select either the Automatic or Manual approval strategy. Important If the web console shows that the cluster uses AWS STS or Microsoft Entra Workload ID, you must set Update approval to Manual . Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster: If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. When the Operator is installed, the metadata indicates which channel and version are installed. Note The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. 11.5.2. Installing from OperatorHub by using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. Tip In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when choosing SingleNamespace mode. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example 11.1. Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m # ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m # ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m # ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace Example 11.2. Example output # ... Kind: PackageManifest # ... Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces # ... Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 # ... Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4 1 Indicates which install modes are supported. 2 3 Example channel names. 4 The channel selected by default if one is not specified. Tip You can print an Operator's version and channel information in YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators . If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create create one by following these steps: Important You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml , for SingleNamespace install mode: Example OperatorGroup object for SingleNamespace install mode apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2 1 2 For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object to subscribe a namespace to an Operator: Create a YAML file for the Subscription object, for example subscription.yaml : Note If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version". Example 11.3. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate environment variables in the container. 8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Example 11.4. Example Subscription object with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. For clusters on cloud providers with token authentication enabled, configure your Subscription object by following these steps: Ensure the Subscription object is set to manual update approvals: kind: Subscription # ... spec: installPlanApproval: Manual 1 1 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Include the relevant cloud provider-specific fields in the Subscription object's config section: If the cluster is in AWS STS mode, include the following fields: kind: Subscription # ... spec: config: env: - name: ROLEARN value: "<role_arn>" 1 1 Include the role ARN details. If the cluster is in Microsoft Entra Workload ID mode, include the following fields: kind: Subscription # ... spec: config: env: - name: CLIENTID value: "<client_id>" 1 - name: TENANTID value: "<tenant_id>" 2 - name: SUBSCRIPTIONID value: "<subscription_id>" 3 1 Include the client ID. 2 Include the tenant ID. 3 Include the subscription ID. Create the Subscription object by running the following command: USD oc apply -f subscription.yaml If you set the installPlanApproval field to Manual , manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update". At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verification Check the status of the Subscription object for your installed Operator by running the following command: USD oc describe subscription <subscription_name> -n <namespace> If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command: USD oc describe operatorgroup <operatorgroup_name> -n <namespace> Additional resources About OperatorGroups
[ "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.15 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/postinstallation_configuration/post-install-preparing-for-users
14.12.3. Dumping Storage Volume Information to an XML File
14.12.3. Dumping Storage Volume Information to an XML File vol-dumpxml --pool pool-or-uuid vol-name-or-key-or-path command takes the volume information as an XML dump to a specified file. This command requires a --pool pool-or-uuid , which is the name or UUID of the storage pool the volume is in. vol-name-or-key-or-path is the name or key or path of the volume to place the resulting XML file.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_volume_commands-dumping_storage_volume_information_to_an_xml_file
18.5.3. DMZs and IPTables
18.5.3. DMZs and IPTables You can create iptables rules to route traffic to certain machines, such as a dedicated HTTP or FTP server, in a demilitarized zone ( DMZ ). A DMZ is a special local subnetwork dedicated to providing services on a public carrier, such as the Internet. For example, to set a rule for routing incoming HTTP requests to a dedicated HTTP server at 10.0.4.2 (outside of the 192.168.1.0/24 range of the LAN), NAT uses the PREROUTING table to forward the packets to the appropriate destination: With this command, all HTTP connections to port 80 from outside of the LAN are routed to the HTTP server on a network separate from the rest of the internal network. This form of network segmentation can prove safer than allowing HTTP connections to a machine on the network. If the HTTP server is configured to accept secure connections, then port 443 must be forwarded as well.
[ "iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.4.2:80" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s2-firewall-dmz
Chapter 7. Common deployment scenarios
Chapter 7. Common deployment scenarios This section provides a brief overview of common deployment scenarios for Red Hat Satellite. Note that many variations and combinations of the following layouts are possible. 7.1. Single location An integrated Capsule is a virtual Capsule Server that is created by default in Satellite Server during the installation process. This means Satellite Server can be used to provision directly connected hosts for Satellite deployment in a single geographical location, therefore only one physical server is needed. The base systems of isolated Capsules can be directly managed by Satellite Server, however it is not recommended to use this layout to manage other hosts in remote locations. 7.2. Single location with segregated subnets Your infrastructure might require multiple isolated subnets even if Red Hat Satellite is deployed in a single geographic location. This can be achieved for example by deploying multiple Capsule Servers with DHCP and DNS services, but the recommended way is to create segregated subnets using a single Capsule. This Capsule is then used to manage hosts and compute resources in those segregated networks to ensure they only have to access the Capsule for provisioning, configuration, errata, and general management. For more information on configuring subnets see Managing Hosts . 7.3. Multiple locations It is recommended to create at least one Capsule Server per geographic location. This practice can save bandwidth since hosts obtain content from a local Capsule Server. Synchronization of content from remote repositories is done only by the Capsule, not by each host in a location. In addition, this layout makes the provisioning infrastructure more reliable and easier to configure. 7.4. Disconnected Satellite In high security environments where hosts are required to function in a closed network disconnected from the Internet, Red Hat Satellite can provision systems with the latest security updates, errata, packages and other content. In such case, Satellite Server does not have direct access to the Internet, but the layout of other infrastructure components is not affected. For information about installing Satellite Server from a disconnected network, see Installing Satellite Server in a disconnected network environment . For information about upgrading a disconnected Satellite, see Upgrading a Disconnected Satellite Server in Upgrading connected Red Hat Satellite to 6.16 . There are two options for importing content to a disconnected Satellite Server: Disconnected Satellite with content ISO - in this setup, you download ISO images with content from the Red Hat Customer Portal and extract them to Satellite Server or a local web server. The content on Satellite Server is then synchronized locally. This allows for complete network isolation of Satellite Server, however, the release frequency of content ISO images is around six weeks and not all product content is included. To see the products in your subscription for which content ISO images are available, log in to the Red Hat Customer Portal at https://access.redhat.com , navigate to Downloads > Red Hat Satellite , and click Content ISOs . For instructions on how to import content ISOs to a disconnected Satellite, see Configuring Satellite to Synchronize Content with a Local CDN Server in Managing content . Note that Content ISOs previously hosted at redhat.com for import into Satellite Server have been deprecated and will be removed in the Satellite version. Disconnected Satellite with Inter-Satellite Synchronization - in this setup, you install a connected Satellite Server and export content from it to populate a disconnected Satellite using some storage device. This allows for exporting both Red Hat provided and custom content at the frequency you choose, but requires deploying an additional server with a separate subscription. For instructions on how to configure Inter-Satellite Synchronization in Satellite, see Synchronizing Content Between Satellite Servers in Managing content . The above methods for importing content to a disconnected Satellite Server can also be used to speed up the initial population of a connected Satellite. 7.5. Capsule with external services You can configure a Capsule Server (integrated or standalone) to use external DNS, DHCP, or TFTP service. If you already have a server that provides these services in your environment, you can integrate it with your Satellite deployment. For information about how to configure a Capsule with external services, see Configuring Capsule Server with External Services in Installing Capsule Server .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/chap-Architecture_Guide-Deployment_Scenarios
Chapter 13. Removing
Chapter 13. Removing The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows: Shut down all Red Hat build of OpenTelemetry pods. Remove any OpenTelemetryCollector instances. Remove the Red Hat build of OpenTelemetry Operator. 13.1. Removing an OpenTelemetry Collector instance by using the web console You can remove an OpenTelemetry Collector instance in the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Red Hat build of OpenTelemetry Operator OpenTelemetryInstrumentation or OpenTelemetryCollector . To remove the relevant instance, select Delete ... Delete . Optional: Remove the Red Hat build of OpenTelemetry Operator. 13.2. Removing an OpenTelemetry Collector instance by using the CLI You can remove an OpenTelemetry Collector instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the OpenTelemetry Collector instance by running the following command: USD oc get deployments -n <project_of_opentelemetry_instance> Remove the OpenTelemetry Collector instance by running the following command: USD oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance> Optional: Remove the Red Hat build of OpenTelemetry Operator. Verification To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again: USD oc get deployments -n <project_of_opentelemetry_instance> 13.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI
[ "oc login --username=<your_username>", "oc get deployments -n <project_of_opentelemetry_instance>", "oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>", "oc get deployments -n <project_of_opentelemetry_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/red_hat_build_of_opentelemetry/dist-tracing-otel-removing
Chapter 11. Executing a business process in Business Central
Chapter 11. Executing a business process in Business Central After you build and deploy the project that contains your business process, you can execute the defined functionality for the business process. As an example, this procedure uses the Mortgage_Process sample project in Business Central. In this scenario, you input data into a mortgage application form acting as the mortgage broker. The MortgageApprovalProcess business process runs and determines whether or not the applicant has offered an acceptable down payment based on the decision rules defined in the project. The business process either ends the rule testing or requests that the applicant increase the down payment to proceed. If the application passes the business rule testing, the bank approver reviews the application and either approves or denies the loan. Prerequisites KIE Server is deployed and connected to Business Central. For more information about KIE Server configuration, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . Procedure In Business Central, go to Menu Projects and select a space. The default space is MySpace. In the upper-right corner of the window, click the arrow to Add Project and select Try Samples . Select the Mortgage_Process sample and click Ok . On the project page, select Mortgage_Process . On the Mortgage_Process page, click Build . After the project has built, click Deploy . Go to Menu Manage Process Definitions . Click anywhere in the MortgageApprovalProcess row to view the process details. Click the Diagram tab to view the business process diagram in the editor. Click New Process Instance to open the Application form and input the following values into the form fields: Down Payment : 30000 Years of amortization : 10 Name : Ivo Annual Income : 60000 SSN : 123456789 Age of property : 8 Address of property : Brno Locale : Rural Property Sale Price : 50000 Click Submit to start a new process instance. After starting the process instance, the Instance Details view opens. Click the Diagram tab to view the process flow within the process diagram. The state of the process is highlighted as it moves through each task. Click Menu Manage Tasks . For this example, the user or users working on the corresponding tasks are members of the following groups: approver : For the Qualify task broker : For the Correct Data and Increase Down Payment tasks manager : For the Final Approval task As the approver, review the Qualify task information, click Claim and then Start to start the task, and then select Is mortgage application in limit? and click Complete to complete the task flow. In the Tasks page, click anywhere in the Final Approval row to open the Final Approval task. Click Claim to claim responsibility for the task, and click Complete to finalize the loan approval process. Note The Save and Release buttons are only used to either pause the approval process and save the instance if you are waiting on a field value, or to release the task for another user to modify.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/execute-bus-proc