title | content | commands | url
---|---|---|---|
10.2.3. Using the QEMU Guest Agent
|
10.2.3. Using the QEMU Guest Agent The QEMU guest agent protocol (QEMU GA) package, qemu-guest-agent , is fully supported in Red Hat Enterprise Linux 6.5 and newer. However, the following limitations apply with regard to isa-serial/virtio-serial transport: The qemu-guest-agent cannot detect whether or not a client has connected to the channel. There is no way for a client to detect whether qemu-guest-agent has disconnected from or reconnected to the back-end. If the virtio-serial device resets and qemu-guest-agent has not connected to the channel (generally caused by a reboot or hot plug), data from the client is dropped. If qemu-guest-agent has connected to the channel following a virtio-serial device reset, data from the client is queued (and eventually throttled if available buffers are exhausted), regardless of whether or not qemu-guest-agent is still running or connected.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-qemu-ga
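The guest agent speaks a line-delimited JSON protocol over the virtio-serial channel, so the caveats above mostly concern host-side clients that talk to it directly. The following Python sketch is illustrative only: the socket path is an assumption (libvirt typically exposes the channel as a UNIX socket under /var/lib/libvirt/qemu/channel/target/), while guest-sync and guest-ping are documented QEMU GA commands.

```python
# Hypothetical host-side QEMU guest agent client (illustrative sketch only).
# Assumes the guest's virtio-serial channel is exposed on the host as a UNIX
# socket; adjust GA_SOCKET to match your libvirt/QEMU configuration.
import json
import socket

GA_SOCKET = "/var/lib/libvirt/qemu/channel/target/demo-vm/org.qemu.guest_agent.0"

def ga_command(sock, command, **arguments):
    """Send one QEMU GA command as line-delimited JSON and read one reply."""
    request = {"execute": command}
    if arguments:
        request["arguments"] = arguments
    sock.sendall(json.dumps(request).encode() + b"\n")
    # Replies are newline-terminated JSON objects.
    reply = b""
    while not reply.endswith(b"\n"):
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("guest agent channel closed")
        reply += chunk
    return json.loads(reply)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(GA_SOCKET)
    # guest-sync echoes the id back and is the documented way to resynchronize
    # the JSON stream after a reconnect -- the reconnection caveat noted above.
    print(ga_command(sock, "guest-sync", id=12345))
    print(ga_command(sock, "guest-ping"))
```

In day-to-day use, virsh qemu-agent-command <domain> '{"execute":"guest-ping"}' is the more common way to reach the agent, since libvirt already manages the channel details.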
|
function::tcpmib_get_state
|
function::tcpmib_get_state Name function::tcpmib_get_state - Get a socket's state Synopsis tcpmib_get_state:long(sk:long) Arguments sk pointer to a struct sock Description Returns the sk_state from a struct sock.
|
[
"tcpmib_get_state:long(sk:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-get-state
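The value this tapset function returns is the kernel's raw sk_state, one of the TCP state constants from include/net/tcp_states.h. As a small illustrative helper (not part of the tapset), the following Python snippet maps those numeric codes to readable names, which is convenient when post-processing output from a SystemTap script that prints the raw value.

```python
# Illustrative helper: decode the numeric sk_state value returned by
# tcpmib_get_state(). The numbering follows the kernel's
# include/net/tcp_states.h.
TCP_STATES = {
    1: "TCP_ESTABLISHED",
    2: "TCP_SYN_SENT",
    3: "TCP_SYN_RECV",
    4: "TCP_FIN_WAIT1",
    5: "TCP_FIN_WAIT2",
    6: "TCP_TIME_WAIT",
    7: "TCP_CLOSE",
    8: "TCP_CLOSE_WAIT",
    9: "TCP_LAST_ACK",
    10: "TCP_LISTEN",
    11: "TCP_CLOSING",
}

def tcp_state_name(sk_state: int) -> str:
    """Return a readable name for a raw sk_state value."""
    return TCP_STATES.get(sk_state, f"UNKNOWN({sk_state})")

if __name__ == "__main__":
    # Example: a SystemTap script printed "state=10" for a listening socket.
    print(tcp_state_name(10))  # -> TCP_LISTEN
```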
|
Chapter 11. Image [config.openshift.io/v1]
|
Chapter 11. Image [config.openshift.io/v1] Description Image governs policies related to imagestream imports and runtime configuration for external registries. It allows cluster admins to configure which registries OpenShift is allowed to import images from, extra CA trust bundles for external registries, and policies to block or allow registry hostnames. When exposing OpenShift's image registry to the public, this also lets cluster admins specify the external hostname. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 11.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalTrustedCA object additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. The namespace for this config map is openshift-config. allowedRegistriesForImport array allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. allowedRegistriesForImport[] object RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. registrySources object registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. 11.1.2. .spec.additionalTrustedCA Description additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. 
The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 11.1.3. .spec.allowedRegistriesForImport Description allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. Type array 11.1.4. .spec.allowedRegistriesForImport[] Description RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. Type object Property Type Description domainName string domainName specifies a domain name for the registry. If the registry uses a non-standard port (other than 80 or 443), the port should be included in the domain name as well. insecure boolean insecure indicates whether the registry is secure (https) or insecure (http). By default (if not specified) the registry is assumed to be secure. 11.1.5. .spec.registrySources Description registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. Type object Property Type Description allowedRegistries array (string) allowedRegistries are the only registries permitted for image pull and push actions. All other registries are denied. Only one of BlockedRegistries or AllowedRegistries may be set. blockedRegistries array (string) blockedRegistries cannot be used for image pull and push actions. All other registries are permitted. Only one of BlockedRegistries or AllowedRegistries may be set. containerRuntimeSearchRegistries array (string) containerRuntimeSearchRegistries are registries that will be searched when pulling images that do not have fully qualified domains in their pull specs. Registries will be searched in the order provided in the list. Note: this search list only works with the container runtime, i.e. CRI-O; it will NOT work with builds or imagestream imports. insecureRegistries array (string) insecureRegistries are registries which do not have a valid TLS certificate or only support HTTP connections. 11.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. internalRegistryHostname string internalRegistryHostname sets the hostname for the default internal image registry. The value must be in "hostname[:port]" format. This value is set by the image registry operator which controls the internal registry hostname. 11.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/images DELETE : delete collection of Image GET : list objects of kind Image POST : create an Image /apis/config.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/config.openshift.io/v1/images/{name}/status GET : read status of the specified Image PATCH : partially update status of the specified Image PUT : replace status of the specified Image 11.2.1. /apis/config.openshift.io/v1/images HTTP method DELETE Description delete collection of Image Table 11.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Image Table 11.2. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 11.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.4. Body parameters Parameter Type Description body Image schema Table 11.5. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 11.2.2. /apis/config.openshift.io/v1/images/{name} Table 11.6. Global path parameters Parameter Type Description name string name of the Image HTTP method DELETE Description delete an Image Table 11.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 11.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.11. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 11.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.13. Body parameters Parameter Type Description body Image schema Table 11.14. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 11.2.3. /apis/config.openshift.io/v1/images/{name}/status Table 11.15. Global path parameters Parameter Type Description name string name of the Image HTTP method GET Description read status of the specified Image Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Image Table 11.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.18. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Image Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body Image schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/image-config-openshift-io-v1
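The endpoints listed above all operate on a single cluster-scoped object, conventionally named cluster. As a hedged sketch of how they might be exercised programmatically (not an official example), the following Python snippet uses the kubernetes client's CustomObjectsApi to read the Image config and merge-patch spec.registrySources; the registry hostname is a placeholder.

```python
# Sketch (not an official example): read and patch the cluster-scoped Image
# config via the generic custom-objects API. Assumes a valid kubeconfig and
# RBAC that permits GET/PATCH on images.config.openshift.io.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL, NAME = "config.openshift.io", "v1", "images", "cluster"

# GET /apis/config.openshift.io/v1/images/cluster
image_cfg = api.get_cluster_custom_object(GROUP, VERSION, PLURAL, NAME)
print(image_cfg.get("spec", {}).get("registrySources", {}))

# PATCH /apis/config.openshift.io/v1/images/cluster
# Placeholder hostname -- substitute registries you actually trust. Depending
# on client version you may need to request application/merge-patch+json
# explicitly for dict bodies.
patch = {
    "spec": {
        "registrySources": {
            "insecureRegistries": ["registry.example.internal:5000"],
        }
    }
}
api.patch_cluster_custom_object(GROUP, VERSION, PLURAL, NAME, patch)
```

An equivalent and more common route is oc patch image.config.openshift.io/cluster --type merge -p '...' from the command line.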
|
2.3. Configuring Cluster Components
|
2.3. Configuring Cluster Components To configure the components and attributes of a cluster, click on the name of the cluster displayed on the Manage Clusters screen. This brings up the Nodes page, as described in Section 2.3.1, "Cluster Nodes" . This page displays a menu along the top of the page, as shown in Figure 2.3, "Cluster Components Menu" , with the following entries: Nodes , as described in Section 2.3.1, "Cluster Nodes" Resources , as described in Section 2.3.2, "Cluster Resources" Fence Devices , as described in Section 2.3.3, "Fence Devices" ACLs , as described in Section 2.3.4, "Configuring ACLs" Cluster Properties , as described in Section 2.3.5, "Cluster Properties" Figure 2.3. Cluster Components Menu 2.3.1. Cluster Nodes Selecting the Nodes option from the menu along the top of the cluster management page displays the currently configured nodes and the status of the currently selected node, including which resources are running on the node and the resource location preferences. This is the default page that displays when you select a cluster from the Manage Clusters screen. You can add or remove nodes from this page, and you can start, stop, restart, or put a node in standby mode. For information on standby mode, see Section 4.4.5, "Standby Mode" . You can also configure fence devices directly from this page by selecting Configure Fencing , as described in Section 2.3.3, "Fence Devices" . 2.3.2. Cluster Resources Selecting the Resources option from the menu along the top of the cluster management page displays the currently configured resources for the cluster, organized according to resource groups. Selecting a group or a resource displays the attributes of that group or resource. From this screen, you can add or remove resources, you can edit the configuration of existing resources, and you can create a resource group. To add a new resource to the cluster, click Add . This brings up the Add Resource screen. When you select a resource type from the drop-down Type menu, the arguments you must specify for that resource appear in the menu. You can click Optional Arguments to display additional arguments you can specify for the resource you are defining. After entering the parameters for the resource you are creating, click Create Resource . When configuring the arguments for a resource, a brief description of the argument appears in the menu. If you move the cursor to the field, a longer help description of that argument is displayed. You can define a resource as a cloned resource, or as a master/slave resource. For information on these resource types, see Chapter 9, Advanced Configuration . Once you have created at least one resource, you can create a resource group. For information on resource groups, see Section 6.5, "Resource Groups" . To create a resource group, select a resource that will be part of the group from the Resources screen, then click Create Group . This displays the Create Group screen. Enter a group name and click Create Group . This returns you to the Resources screen, which now displays the group name for the resource. After you have created a resource group, you can indicate that group name as a resource parameter when you create or modify additional resources. 2.3.3. Fence Devices Selecting the Fence Devices option from the menu along the top of the cluster management page displays the Fence Devices screen, showing the currently configured fence devices. To add a new fence device to the cluster, click Add . This brings up the Add Fence Device screen. 
When you select a fence device type from the drop-down Type menu, the arguments you must specify for that fence device appear in the menu. You can click on Optional Arguments to display additional arguments you can specify for the fence device you are defining. After entering the parameters for the new fence device, click Create Fence Instance . For information on configuring fence devices with Pacemaker, see Chapter 5, Fencing: Configuring STONITH . 2.3.4. Configuring ACLs Selecting the ACLs option from the menu along the top of the cluster management page displays a screen from which you can set permissions for local users, allowing read-only or read-write access to the cluster configuration by using access control lists (ACLs). To assign ACL permissions, you create a role and define the access permissions for that role. Each role can have an unlimited number of permissions (read/write/deny) applied to either an XPath query or the ID of a specific element. After defining the role, you can assign it to an existing user or group. 2.3.5. Cluster Properties Selecting the Cluster Properties option from the menu along the top of the cluster management page displays the cluster properties and allows you to modify these properties from their default values. For information on the Pacemaker cluster properties, see Chapter 12, Pacemaker Cluster Properties .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-guiclustcomponents-HAAR
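Every step described above for the web UI has a pcs command-line equivalent, which is useful when the workflow needs to be scripted. The sketch below is illustrative only: the resource name, IP address, group name, and fence agent parameters are placeholders rather than values from this document.

```python
# Illustrative only: CLI equivalents of the web UI steps above, driven from
# Python via subprocess. Resource names and parameters are placeholders.
import subprocess

def pcs(*args: str) -> None:
    """Run a pcs command and fail loudly if it returns non-zero."""
    subprocess.run(["pcs", *args], check=True)

# Roughly equivalent to Add on the Resources screen:
pcs("resource", "create", "VirtualIP", "ocf:heartbeat:IPaddr2",
    "ip=192.168.0.120", "cidr_netmask=24", "op", "monitor", "interval=30s")

# Roughly equivalent to Create Group on the Resources screen:
pcs("resource", "group", "add", "webgroup", "VirtualIP")

# Roughly equivalent to Add on the Fence Devices screen:
pcs("stonith", "create", "myfence", "fence_xvm", "pcmk_host_list=node1")
```

Running pcs directly in a shell is equally valid; the Python wrapper is only for illustration.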
|
Chapter 16. Events APIs
|
Chapter 16. Events APIs 16.1. Events APIs 16.1.1. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 16.2. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object Required eventTime 16.2.1. Specification Property Type Description action string action is what action was taken/failed regarding to the regarding object. It is machine-readable. This field cannot be empty for new Events and it can have at most 128 characters. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deprecatedCount integer deprecatedCount is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedFirstTimestamp Time deprecatedFirstTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedLastTimestamp Time deprecatedLastTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type. deprecatedSource EventSource deprecatedSource is the deprecated field assuring backward compatibility with core.v1 Event type. eventTime MicroTime eventTime is the time when this Event was first observed. It is required. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata note string note is a human-readable description of the status of this operation. Maximal length of the note is 1kB, but libraries should be prepared to handle values up to 64kB. reason string reason is why the action was taken. It is human-readable. This field cannot be empty for new Events and it can have at most 128 characters. regarding ObjectReference regarding contains the object this Event is about. In most cases it's an Object reporting controller implements, e.g. ReplicaSetController implements ReplicaSets and this event is emitted because it acts on some changes in a ReplicaSet object. related ObjectReference related is the optional secondary object for more complex actions. E.g. when regarding object triggers a creation or deletion of related object. 
reportingController string reportingController is the name of the controller that emitted this Event, e.g. kubernetes.io/kubelet . This field cannot be empty for new Events. reportingInstance string reportingInstance is the ID of the controller instance, e.g. kubelet-xyzf . This field cannot be empty for new Events and it can have at most 128 characters. series object EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. How often to update the EventSeries is up to the event reporters. The default event reporter in "k8s.io/client-go/tools/events/event_broadcaster.go" shows how this struct is updated on heartbeats and can guide customized reporter implementations. type string type is the type of this event (Normal, Warning), new types could be added in the future. It is machine-readable. This field cannot be empty for new Events. 16.2.1.1. .series Description EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. How often to update the EventSeries is up to the event reporters. The default event reporter in "k8s.io/client-go/tools/events/event_broadcaster.go" shows how this struct is updated on heartbeats and can guide customized reporter implementations. Type object Required count lastObservedTime Property Type Description count integer count is the number of occurrences in this series up to the last heartbeat time. lastObservedTime MicroTime lastObservedTime is the time when last Event from the series was seen before last heartbeat. 16.2.2. API endpoints The following API endpoints are available: /apis/events.k8s.io/v1/events GET : list or watch objects of kind Event /apis/events.k8s.io/v1/watch/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /apis/events.k8s.io/v1/namespaces/{namespace}/events DELETE : delete collection of Event GET : list or watch objects of kind Event POST : create an Event /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /apis/events.k8s.io/v1/namespaces/{namespace}/events/{name} DELETE : delete an Event GET : read the specified Event PATCH : partially update the specified Event PUT : replace the specified Event /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events/{name} GET : watch changes to an object of kind Event. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 16.2.2.1. /apis/events.k8s.io/v1/events Table 16.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Event Table 16.2. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty 16.2.2.2. /apis/events.k8s.io/v1/watch/events Table 16.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 16.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.2.3. /apis/events.k8s.io/v1/namespaces/{namespace}/events Table 16.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Event Table 16.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 16.8. Body parameters Parameter Type Description body DeleteOptions schema Table 16.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Event Table 16.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.11. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty HTTP method POST Description create an Event Table 16.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.13. Body parameters Parameter Type Description body Event schema Table 16.14. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 202 - Accepted Event schema 401 - Unauthorized Empty 16.2.2.4. /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events Table 16.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 16.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.2.5. /apis/events.k8s.io/v1/namespaces/{namespace}/events/{name} Table 16.18. 
Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 16.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Event Table 16.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 16.21. Body parameters Parameter Type Description body DeleteOptions schema Table 16.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Event Table 16.23. HTTP responses HTTP code Reponse body 200 - OK Event schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Event Table 16.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 16.25. Body parameters Parameter Type Description body Patch schema Table 16.26. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Event Table 16.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.28. Body parameters Parameter Type Description body Event schema Table 16.29. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty 16.2.2.6. /apis/events.k8s.io/v1/watch/namespaces/{namespace}/events/{name} Table 16.30. Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 16.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, the `resourceVersionMatch` option must also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset: an Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Event. Deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 16.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
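The watch and list parameters described above can be exercised directly against the events.k8s.io/v1 endpoints. The following is a minimal sketch, not taken from this reference: it assumes a reachable API server address in a hypothetical APISERVER variable, a bearer token in TOKEN, and illustrative namespace and Event names.

# Watch Events in the "default" namespace, asking the server to first replay the
# current state as synthetic events followed by a "Bookmark" event (requires
# sendInitialEvents together with watch and resourceVersionMatch=NotOlderThan).
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/events.k8s.io/v1/namespaces/default/events?watch=true&sendInitialEvents=true&resourceVersionMatch=NotOlderThan&allowWatchBookmarks=true"

# Watch a single Event by name with fieldSelector, as recommended instead of the
# deprecated per-object watch endpoint.
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/events.k8s.io/v1/namespaces/default/events?watch=true&fieldSelector=metadata.name=my-event"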
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/events-apis-1
|
A.17. libguestfs Troubleshooting
|
A.17. libguestfs Troubleshooting A test tool is available to check that libguestfs is working. After installing libguestfs, enter the libguestfs-test-tool command shown below (root access is not required) to test for normal operation. This tool prints a large amount of text to test the operation of libguestfs. If the test is successful, the following text appears near the end of the output:
|
[
"libguestfs-test-tool",
"===== TEST FINISHED OK ====="
] |
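As a minimal sketch of automating this check in a script (assuming only that libguestfs-test-tool is installed and on the PATH), the success marker shown above can be tested for directly:

# Run the test tool and check for the success marker near the end of its output.
if libguestfs-test-tool 2>&1 | grep -q "===== TEST FINISHED OK ====="; then
  echo "libguestfs is working"
else
  echo "libguestfs test failed" >&2
fi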
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-libguestfs_troubleshooting
|
Chapter 6. PodMonitor [monitoring.coreos.com/v1]
|
Chapter 6. PodMonitor [monitoring.coreos.com/v1] Description PodMonitor defines monitoring for a set of pods. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Pod selection for target discovery by Prometheus. 6.1.1. .spec Description Specification of desired Pod selection for target discovery by Prometheus. Type object Required selector Property Type Description attachMetadata object attachMetadata defines additional metadata which is added to the discovered targets. It requires Prometheus >= v2.37.0. bodySizeLimit string When defined, bodySizeLimit specifies a job level limit on the size of uncompressed response body that will be accepted by Prometheus. It requires Prometheus >= v2.28.0. jobLabel string The label to use to retrieve the job name from. jobLabel selects the label from the associated Kubernetes Pod object which will be used as the job label for all metrics. For example if jobLabel is set to foo and the Kubernetes Pod object is labeled with foo: bar , then Prometheus adds the job="bar" label to all ingested metrics. If the value of this field is empty, the job label of the metrics defaults to the namespace and name of the PodMonitor object (e.g. <namespace>/<name> ). keepDroppedTargets integer Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. It requires Prometheus >= v2.47.0. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. It requires Prometheus >= v2.27.0. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. It requires Prometheus >= v2.27.0. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. It requires Prometheus >= v2.27.0. namespaceSelector object Selector to select which namespaces the Kubernetes Pods objects are discovered from. podMetricsEndpoints array List of endpoints part of this PodMonitor. podMetricsEndpoints[] object PodMetricsEndpoint defines an endpoint serving Prometheus metrics to be scraped by Prometheus. podTargetLabels array (string) podTargetLabels defines the labels which are transferred from the associated Kubernetes Pod object onto the ingested metrics. sampleLimit integer sampleLimit defines a per-scrape limit on the number of scraped samples that will be accepted. scrapeClass string The scrape class to apply. scrapeProtocols array (string) scrapeProtocols defines the protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). If unset, Prometheus uses its default value. It requires Prometheus >= v2.49.0. 
selector object Label selector to select the Kubernetes Pod objects. targetLimit integer targetLimit defines a limit on the number of scraped targets that will be accepted. 6.1.2. .spec.attachMetadata Description attachMetadata defines additional metadata which is added to the discovered targets. It requires Prometheus >= v2.37.0. Type object Property Type Description node boolean When set to true, Prometheus must have the get permission on the Nodes objects. 6.1.3. .spec.namespaceSelector Description Selector to select which namespaces the Kubernetes Pods objects are discovered from. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 6.1.4. .spec.podMetricsEndpoints Description List of endpoints part of this PodMonitor. Type array 6.1.5. .spec.podMetricsEndpoints[] Description PodMetricsEndpoint defines an endpoint serving Prometheus metrics to be scraped by Prometheus. Type object Property Type Description authorization object authorization configures the Authorization header credentials to use when scraping the target. Cannot be set at the same time as basicAuth , or oauth2 . basicAuth object basicAuth configures the Basic Authentication credentials to use when scraping the target. Cannot be set at the same time as authorization , or oauth2 . bearerTokenSecret object bearerTokenSecret specifies a key of a Secret containing the bearer token for scraping targets. The secret needs to be in the same namespace as the PodMonitor object and readable by the Prometheus Operator. Deprecated: use authorization instead. enableHttp2 boolean enableHttp2 can be used to disable HTTP2 when scraping the target. filterRunning boolean When true, the pods which are not running (e.g. either in Failed or Succeeded state) are dropped during the target discovery. If unset, the filtering is enabled. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase followRedirects boolean followRedirects defines whether the scrape requests should follow HTTP 3xx redirects. honorLabels boolean When true, honorLabels preserves the metric's labels when they collide with the target's labels. honorTimestamps boolean honorTimestamps controls whether Prometheus preserves the timestamps when exposed by the target. interval string Interval at which Prometheus scrapes the metrics from the target. If empty, Prometheus uses the global scrape interval. metricRelabelings array metricRelabelings configures the relabeling rules to apply to the samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config oauth2 object oauth2 configures the OAuth2 settings to use when scraping the target. It requires Prometheus >= 2.27.0. Cannot be set at the same time as authorization , or basicAuth . params object params define optional HTTP URL parameters. params{} array (string) path string HTTP path from which to scrape for metrics. If empty, Prometheus uses the default value (e.g. /metrics ). port string Name of the Pod port which this endpoint refers to. It takes precedence over targetPort . proxyUrl string proxyURL configures the HTTP Proxy URL (e.g. "http://proxyserver:2195") to go through when scraping the target. 
relabelings array relabelings configures the relabeling rules to apply the target's metadata labels. The Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config scheme string HTTP scheme to use for scraping. http and https are the expected values unless you rewrite the scheme label via relabeling. If empty, Prometheus uses the default value http . scrapeTimeout string Timeout after which Prometheus considers the scrape to be failed. If empty, Prometheus uses the global scrape timeout unless it is less than the target's scrape interval value in which the latter is used. targetPort integer-or-string Name or number of the target port of the Pod object behind the Service, the port must be specified with container port property. Deprecated: use 'port' instead. tlsConfig object TLS configuration to use when scraping the target. trackTimestampsStaleness boolean trackTimestampsStaleness defines whether Prometheus tracks staleness of the metrics that have an explicit timestamp present in scraped data. Has no effect if honorTimestamps is false. It requires Prometheus >= v2.48.0. 6.1.6. .spec.podMetricsEndpoints[].authorization Description authorization configures the Authorization header credentials to use when scraping the target. Cannot be set at the same time as basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 6.1.7. .spec.podMetricsEndpoints[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.8. .spec.podMetricsEndpoints[].basicAuth Description basicAuth configures the Basic Authentication credentials to use when scraping the target. Cannot be set at the same time as authorization , or oauth2 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 6.1.9. .spec.podMetricsEndpoints[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.10. 
.spec.podMetricsEndpoints[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.11. .spec.podMetricsEndpoints[].bearerTokenSecret Description bearerTokenSecret specifies a key of a Secret containing the bearer token for scraping targets. The secret needs to be in the same namespace as the PodMonitor object and readable by the Prometheus Operator. Deprecated: use authorization instead. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.12. .spec.podMetricsEndpoints[].metricRelabelings Description metricRelabelings configures the relabeling rules to apply to the samples before ingestion. Type array 6.1.13. .spec.podMetricsEndpoints[].metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.14. .spec.podMetricsEndpoints[].oauth2 Description oauth2 configures the OAuth2 settings to use when scraping the target. It requires Prometheus >= 2.27.0. Cannot be set at the same time as authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 6.1.15. 
.spec.podMetricsEndpoints[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.16. .spec.podMetricsEndpoints[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.17. .spec.podMetricsEndpoints[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.18. .spec.podMetricsEndpoints[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.19. .spec.podMetricsEndpoints[].params Description params define optional HTTP URL parameters. Type object 6.1.20. .spec.podMetricsEndpoints[].relabelings Description relabelings configures the relabeling rules to apply the target's metadata labels. The Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 6.1.21. .spec.podMetricsEndpoints[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. 
It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.22. .spec.podMetricsEndpoints[].tlsConfig Description TLS configuration to use when scraping the target. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.23. .spec.podMetricsEndpoints[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.24. .spec.podMetricsEndpoints[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.25. .spec.podMetricsEndpoints[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.26. .spec.podMetricsEndpoints[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.27. .spec.podMetricsEndpoints[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.28. .spec.podMetricsEndpoints[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.29. .spec.podMetricsEndpoints[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 6.1.30. .spec.selector Description Label selector to select the Kubernetes Pod objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.31. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.32. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/podmonitors GET : list objects of kind PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors DELETE : delete collection of PodMonitor GET : list objects of kind PodMonitor POST : create a PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} DELETE : delete a PodMonitor GET : read the specified PodMonitor PATCH : partially update the specified PodMonitor PUT : replace the specified PodMonitor 6.2.1. /apis/monitoring.coreos.com/v1/podmonitors HTTP method GET Description list objects of kind PodMonitor Table 6.1. HTTP responses HTTP code Reponse body 200 - OK PodMonitorList schema 401 - Unauthorized Empty 6.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors HTTP method DELETE Description delete collection of PodMonitor Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PodMonitor Table 6.3. HTTP responses HTTP code Reponse body 200 - OK PodMonitorList schema 401 - Unauthorized Empty HTTP method POST Description create a PodMonitor Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body PodMonitor schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 202 - Accepted PodMonitor schema 401 - Unauthorized Empty 6.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the PodMonitor HTTP method DELETE Description delete a PodMonitor Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodMonitor Table 6.10. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodMonitor Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodMonitor Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body PodMonitor schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 401 - Unauthorized Empty
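To illustrate the spec fields and API endpoints above, the following is a minimal sketch rather than a prescribed procedure: it assumes a reachable API server address in a hypothetical APISERVER variable, a bearer token in TOKEN, and an example workload labeled app: example that exposes metrics on a container port named web; the namespace and object names are illustrative.

# Create a PodMonitor that scrapes the "web" port of pods labeled app=example.
curl -sS -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  "${APISERVER}/apis/monitoring.coreos.com/v1/namespaces/monitoring-demo/podmonitors" \
  -d '{
        "apiVersion": "monitoring.coreos.com/v1",
        "kind": "PodMonitor",
        "metadata": {"name": "example-podmonitor"},
        "spec": {
          "selector": {"matchLabels": {"app": "example"}},
          "namespaceSelector": {"matchNames": ["monitoring-demo"]},
          "podMetricsEndpoints": [{"port": "web", "path": "/metrics", "interval": "30s"}]
        }
      }'

# Read the object back from the per-name endpoint.
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/monitoring.coreos.com/v1/namespaces/monitoring-demo/podmonitors/example-podmonitor"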
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/podmonitor-monitoring-coreos-com-v1
|
Preface
|
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Binaries for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.18/pr01
|
Chapter 63. VulnMgmtService
|
Chapter 63. VulnMgmtService 63.1. VulnMgmtExportWorkloads GET /v1/export/vuln-mgmt/workloads Streams vulnerability data upon request. Each entry consists of a deployment and the associated container images. 63.1.1. Description The response is structured as: {\"result\": {\"deployment\": {... }, \"images\": [... ]}} ... {\"result\": {\"deployment\": {... }, \"images\": [... ]}} 63.1.2. Parameters 63.1.2.1. Query Parameters Name Description Required Default Pattern timeout Request timeout in seconds. - null query Query to constrain the deployments for which vulnerability data is returned. The queries contain pairs of `Search Option:Value` separated by `+` signs. For HTTP requests the query should be quoted. For example > curl "USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Ascanner%2BNamespace%3Astackrox" queries vulnerability data for all scanner deployments in the stackrox namespace. See https://docs.openshift.com/acs/operating/search-filter.html for more information. - null 63.1.3. Return Type Stream_result_of_v1VulnMgmtExportWorkloadsResponse 63.1.4. Content Type application/json 63.1.5. Responses Table 63.1. HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1VulnMgmtExportWorkloadsResponse 0 An unexpected error response. GooglerpcStatus 63.1.6. Samples 63.1.7. Common object reference 63.1.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 63.1.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 63.1.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 63.1.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 63.1.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 63.1.7.6. ContainerConfigEnvironmentConfig Field Name Required Nullable Type Description Format key String value String envVarSource EnvironmentConfigEnvVarSource UNSET, RAW, SECRET_KEY, CONFIG_MAP_KEY, FIELD, RESOURCE_FIELD, UNKNOWN, 63.1.7.7. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 63.1.7.8. EnvironmentConfigEnvVarSource Enum Values UNSET RAW SECRET_KEY CONFIG_MAP_KEY FIELD RESOURCE_FIELD UNKNOWN 63.1.7.9. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 63.1.7.10. PortConfigExposureInfo Field Name Required Nullable Type Description Format level PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, serviceName String serviceId String serviceClusterIp String servicePort Integer int32 nodePort Integer int32 externalIps List of string externalHostnames List of string 63.1.7.11. PortConfigExposureLevel Enum Values UNSET EXTERNAL NODE INTERNAL HOST ROUTE 63.1.7.12. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 63.1.7.12.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 63.1.7.13. SeccompProfileProfileType Enum Values UNCONFINED RUNTIME_DEFAULT LOCALHOST 63.1.7.14. SecurityContextSELinux Field Name Required Nullable Type Description Format user String role String type String level String 63.1.7.15. SecurityContextSeccompProfile Field Name Required Nullable Type Description Format type SeccompProfileProfileType UNCONFINED, RUNTIME_DEFAULT, LOCALHOST, localhostProfile String 63.1.7.16. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 63.1.7.17. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 63.1.7.18. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 63.1.7.19. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 63.1.7.20. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 63.1.7.21. 
StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 63.1.7.22. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 63.1.7.23. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 63.1.7.24. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 63.1.7.25. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 63.1.7.26. StorageContainer Field Name Required Nullable Type Description Format id String config StorageContainerConfig image StorageContainerImage securityContext StorageSecurityContext volumes List of StorageVolume ports List of StoragePortConfig secrets List of StorageEmbeddedSecret resources StorageResources name String livenessProbe StorageLivenessProbe readinessProbe StorageReadinessProbe 63.1.7.27. StorageContainerConfig Field Name Required Nullable Type Description Format env List of ContainerConfigEnvironmentConfig command List of string args List of string directory String user String uid String int64 appArmorProfile String 63.1.7.28. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 63.1.7.29. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 63.1.7.30. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 63.1.7.31. StorageDeployment Field Name Required Nullable Type Description Format id String name String hash String uint64 type String namespace String namespaceId String orchestratorComponent Boolean replicas String int64 labels Map of string podLabels Map of string labelSelector StorageLabelSelector created Date date-time clusterId String clusterName String containers List of StorageContainer annotations Map of string priority String int64 inactive Boolean imagePullSecrets List of string serviceAccount String serviceAccountPermissionLevel StoragePermissionLevel UNSET, NONE, DEFAULT, ELEVATED_IN_NAMESPACE, ELEVATED_CLUSTER_WIDE, CLUSTER_ADMIN, automountServiceAccountToken Boolean hostNetwork Boolean hostPid Boolean hostIpc Boolean runtimeClass String tolerations List of StorageToleration ports List of StoragePortConfig stateTimestamp String int64 riskScore Float float platformComponent Boolean 63.1.7.32. 
StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 63.1.7.33. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 63.1.7.34. StorageEmbeddedSecret Field Name Required Nullable Type Description Format name String path String 63.1.7.35. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 63.1.7.36. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 63.1.7.37. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 63.1.7.38. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 63.1.7.39. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 63.1.7.40. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 63.1.7.41. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 63.1.7.42. 
StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 63.1.7.43. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 63.1.7.44. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 63.1.7.45. StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 63.1.7.46. StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 63.1.7.47. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 63.1.7.48. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 63.1.7.49. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 63.1.7.50. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 63.1.7.51. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 63.1.7.52. StorageLivenessProbe Field Name Required Nullable Type Description Format defined Boolean 63.1.7.53. StoragePermissionLevel Enum Values UNSET NONE DEFAULT ELEVATED_IN_NAMESPACE ELEVATED_CLUSTER_WIDE CLUSTER_ADMIN 63.1.7.54. StoragePortConfig Field Name Required Nullable Type Description Format name String containerPort Integer int32 protocol String exposure PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, exposedPort Integer int32 exposureInfos List of PortConfigExposureInfo 63.1.7.55. StorageReadinessProbe Field Name Required Nullable Type Description Format defined Boolean 63.1.7.56. 
StorageResources Field Name Required Nullable Type Description Format cpuCoresRequest Float float cpuCoresLimit Float float memoryMbRequest Float float memoryMbLimit Float float 63.1.7.57. StorageSecurityContext Field Name Required Nullable Type Description Format privileged Boolean selinux SecurityContextSELinux dropCapabilities List of string addCapabilities List of string readOnlyRootFilesystem Boolean seccompProfile SecurityContextSeccompProfile allowPrivilegeEscalation Boolean 63.1.7.58. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 63.1.7.59. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 63.1.7.60. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 63.1.7.61. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 63.1.7.62. StorageToleration Field Name Required Nullable Type Description Format key String operator StorageTolerationOperator TOLERATION_OPERATION_UNKNOWN, TOLERATION_OPERATOR_EXISTS, TOLERATION_OPERATOR_EQUAL, value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 63.1.7.63. StorageTolerationOperator Enum Values TOLERATION_OPERATION_UNKNOWN TOLERATION_OPERATOR_EXISTS TOLERATION_OPERATOR_EQUAL 63.1.7.64. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 63.1.7.65. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 63.1.7.66. StorageVolume Field Name Required Nullable Type Description Format name String source String destination String readOnly Boolean type String mountPropagation VolumeMountPropagation NONE, HOST_TO_CONTAINER, BIDIRECTIONAL, 63.1.7.67. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 63.1.7.68. StorageVulnerabilityState VulnerabilityState indicates if a vulnerability is being observed or deferred (suppressed). By default, vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 63.1.7.69. StreamResultOfV1VulnMgmtExportWorkloadsResponse Field Name Required Nullable Type Description Format result V1VulnMgmtExportWorkloadsResponse error GooglerpcStatus 63.1.7.70. V1VulnMgmtExportWorkloadsResponse The workloads response contains the full image details including the vulnerability data. Field Name Required Nullable Type Description Format deployment StorageDeployment images List of StorageImage 63.1.7.71. VolumeMountPropagation Enum Values NONE HOST_TO_CONTAINER BIDIRECTIONAL
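The StreamResultOfV1VulnMgmtExportWorkloadsResponse and V1VulnMgmtExportWorkloadsResponse messages above describe the payload returned by the workload export stream. As a rough sketch of how a client might consume that stream from the command line, the example below uses curl and jq; the endpoint path /v1/export/vuln-mgmt/workloads, the ROX_CENTRAL_ADDRESS and ROX_API_TOKEN variables, and the timeout query parameter are assumptions for illustration and should be verified against the endpoint listing in this API reference.

# Assumed invocation of the workload export stream; verify the path and parameters against this reference.
# Each streamed line is a JSON object whose "result" field carries a deployment and its images.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_CENTRAL_ADDRESS}/v1/export/vuln-mgmt/workloads?timeout=60" \
  | jq -r '.result.deployment.name'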
|
[
"For any update to EnvVarSource, please also update 'ui/src/messages/common.js'",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next tag: 12",
"Next available tag: 36",
"Next Tag: 13",
"Next Tag: 22",
"ScoreVersion can be deprecated ROX-26066",
"Next Tag: 19",
"If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6",
"Next tag: 8",
"Next Tag: 6",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"For any update to PermissionLevel, also update: - pkg/searchbasedpolicies/builders/k8s_rbac.go - ui/src/messages/common.js",
"Next Available Tag: 6",
"Stream result of v1VulnMgmtExportWorkloadsResponse"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/vulnmgmtservice
|
Chapter 30. Multi-site Pacemaker clusters
|
Chapter 30. Multi-site Pacemaker clusters When a cluster spans more than one site, issues with network connectivity between the sites can lead to split-brain situations. When connectivity drops, there is no way for a node on one site to determine whether a node on another site has failed or is still functioning with a failed site interlink. In addition, it can be problematic to provide high availability services across two sites which are too far apart to keep synchronous. To address these issues, Pacemaker provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. 30.1. Overview of Booth cluster ticket manager The Booth ticket manager is a distributed service that is meant to be run on a different physical network than the networks that connect the cluster nodes at particular sites. It yields another, loose cluster, a Booth formation , that sits on top of the regular clusters at the sites. This aggregated communication layer facilitates consensus-based decision processes for individual Booth tickets. A Booth ticket is a singleton in the Booth formation and represents a time-sensitive, movable unit of authorization. Resources can be configured to require a certain ticket to run. This can ensure that resources are run at only one site at a time, for which a ticket or tickets have been granted. You can think of a Booth formation as an overlay cluster consisting of clusters running at different sites, where all the original clusters are independent of each other. It is the Booth service which communicates to the clusters whether they have been granted a ticket, and it is Pacemaker that determines whether to run resources in a cluster based on a Pacemaker ticket constraint. This means that when using the ticket manager, each of the clusters can run its own resources as well as shared resources. For example there can be resources A, B and C running only in one cluster, resources D, E, and F running only in the other cluster, and resources G and H running in either of the two clusters as determined by a ticket. It is also possible to have an additional resource J that could run in either of the two clusters as determined by a separate ticket. 30.2. Configuring multi-site clusters with Pacemaker You can configure a multi-site configuration that uses the Booth ticket manager with the following procedure. These example commands use the following arrangement: Cluster 1 consists of the nodes cluster1-node1 and cluster1-node2 Cluster 1 has a floating IP address assigned to it of 192.168.11.100 Cluster 2 consists of cluster2-node1 and cluster2-node2 Cluster 2 has a floating IP address assigned to it of 192.168.22.100 The arbitrator node is arbitrator-node with an ip address of 192.168.99.100 The name of the Booth ticket that this configuration uses is apacheticket These example commands assume that the cluster resources for an Apache service have been configured as part of the resource group apachegroup for each cluster. It is not required that the resources and resource groups be the same on each cluster to configure a ticket constraint for those resources, since the Pacemaker instance for each cluster is independent, but that is a common failover scenario. Note that at any time in the configuration procedure you can enter the pcs booth config command to display the booth configuration for the current node or cluster or the pcs booth status command to display the current status of booth on the local node. 
Procedure Install the booth-site Booth ticket manager package on each node of both clusters. Install the pcs , booth-core , and booth-arbitrator packages on the arbitrator node. If you are running the firewalld daemon, execute the following commands on all nodes in both clusters as well as on the arbitrator node to enable the ports that are required by the Red Hat High Availability Add-On. You may need to modify which ports are open to suit local conditions. For more information about the ports that are required by the Red Hat High-Availability Add-On, see Enabling ports for the High Availability Add-On . Create a Booth configuration on one node of one cluster. The addresses you specify for each cluster and for the arbitrator must be IP addresses. For each cluster, you specify a floating IP address. This command creates the configuration files /etc/booth/booth.conf and /etc/booth/booth.key on the node from which it is run. Create a ticket for the Booth configuration. This is the ticket that you will use to define the resource constraint that will allow resources to run only when this ticket has been granted to the cluster. This basic failover configuration procedure uses only one ticket, but you can create additional tickets for more complicated scenarios where each ticket is associated with a different resource or resources. Synchronize the Booth configuration to all nodes in the current cluster. From the arbitrator node, pull the Booth configuration to the arbitrator. If you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Pull the Booth configuration to the other cluster and synchronize to all the nodes of that cluster. As with the arbitrator node, if you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Start and enable Booth on the arbitrator. Note You must not manually start or enable Booth on any of the nodes of the clusters since Booth runs as a Pacemaker resource in those clusters. Configure Booth to run as a cluster resource on both cluster sites, using the floating IP addresses assigned to each cluster. This creates a resource group with booth-ip and booth-service as members of that group. Add a ticket constraint to the resource group you have defined for each cluster. You can enter the following command to display the currently configured ticket constraints. Grant the ticket you created for this setup to the first cluster. Note that it is not necessary to have defined ticket constraints before granting a ticket. Once you have initially granted a ticket to a cluster, then Booth takes over ticket management unless you override this manually with the pcs booth ticket revoke command. For information about the pcs booth administration commands, see the PCS help screen for the pcs booth command. It is possible to add or remove tickets at any time, even after completing this procedure. After adding or removing a ticket, however, you must synchronize the configuration files to the other nodes and clusters as well as to the arbitrator and grant the ticket as is shown in this procedure. For information about additional Booth administration commands that you can use for cleaning up and removing Booth configuration files, tickets, and resources, see the PCS help screen for the pcs booth command.
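For orientation, the following sketch shows roughly what the /etc/booth/booth.conf file generated by the pcs booth setup command might contain for the example addresses and ticket used in this procedure; the exact contents and ordering can vary between booth versions, so treat it as illustrative rather than authoritative.

# Sketch of /etc/booth/booth.conf as produced by "pcs booth setup" (contents may vary by version)
authfile = /etc/booth/booth.key   # shared key distributed by "pcs booth sync" and "pcs booth pull"
site = 192.168.11.100             # floating IP address of cluster 1
site = 192.168.22.100             # floating IP address of cluster 2
arbitrator = 192.168.99.100       # arbitrator node
ticket = "apacheticket"           # ticket created with "pcs booth ticket add apacheticket"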
|
[
"yum install -y booth-site yum install -y booth-site yum install -y booth-site yum install -y booth-site",
"yum install -y pcs booth-core booth-arbitrator",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability",
"pcs booth setup sites 192.168.11.100 192.168.22.100 arbitrators 192.168.99.100",
"pcs booth ticket add apacheticket",
"pcs booth sync",
"pcs host auth cluster1-node1 pcs booth pull cluster1-node1",
"pcs host auth cluster1-node1 pcs booth pull cluster1-node1 pcs booth sync",
"pcs booth start pcs booth enable",
"pcs booth create ip 192.168.11.100 pcs booth create ip 192.168.22.100",
"pcs constraint ticket add apacheticket apachegroup pcs constraint ticket add apacheticket apachegroup",
"pcs constraint ticket [show]",
"pcs booth ticket grant apacheticket"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-multisite-cluster-configuring-and-managing-high-availability-clusters
|
Chapter 14. Configuring custom domains for Knative services
|
Chapter 14. Configuring custom domains for Knative services 14.1. Configuring a custom domain for a Knative service Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com . You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service. 14.2. Custom domain mapping You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route. 14.2.1. Creating a custom domain mapping You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the OpenShift Container Platform cluster. Procedure Create a YAML file containing the DomainMapping CR in the same namespace as the target CR you want to map to: apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1 1 The custom domain name that you want to map to the target CR. 2 The namespace of both the DomainMapping CR and the target CR. 3 The name of the target CR to map to the custom domain. 4 The type of CR being mapped to the custom domain. Example service domain mapping apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: showcase kind: Service apiVersion: serving.knative.dev/v1 Example route domain mapping apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1 Apply the DomainMapping CR as a YAML file: USD oc apply -f <filename> 14.3. Custom domains for Knative services using the Knative CLI You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Knative ( kn ) CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route. 14.3.1. Creating a custom domain mapping by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created a Knative service or route, and control a custom domain that you want to map to that CR. Note Your custom domain must point to the DNS of the OpenShift Container Platform cluster. 
You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Map a domain to a CR in the current namespace: USD kn domain create <domain_mapping_name> --ref <target_name> Example command USD kn domain create example.com --ref showcase The --ref flag specifies an Addressable target CR for domain mapping. If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. Map a domain to a Knative service in a specified namespace: USD kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace> Example command USD kn domain create example.com --ref ksvc:showcase:example-namespace Map a domain to a Knative route: USD kn domain create <domain_mapping_name> --ref <kroute:route_name> Example command USD kn domain create example.com --ref kroute:example-route 14.4. Domain mapping using the Developer perspective You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Developer perspective of the OpenShift Container Platform web console to map a DomainMapping custom resource (CR) to a Knative service. 14.4.1. Mapping a custom domain to a service by using the Developer perspective Prerequisites You have logged in to the web console. You are in the Developer perspective. The OpenShift Serverless Operator and Knative Serving are installed on your cluster. This must be completed by a cluster administrator. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the OpenShift Container Platform cluster. Procedure Navigate to the Topology page. Right-click the service you want to map to a domain, and select the Edit option that contains the service name. For example, if the service is named showcase , select the Edit showcase option. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it in the Domain mapping list. If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Click Save to save the changes to your service. Verification Navigate to the Topology page. Click on the service that you have created. In the Resources tab of the service information window, you can see the domain you have mapped to the service listed under Domain mappings . 14.5. Domain mapping using the Administrator perspective If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative ( kn ) CLI or YAML files, you can use the Administrator perspective of the OpenShift Container Platform web console. 14.5.1. Mapping a custom domain to a service by using the Administrator perspective Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com .
You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service. If you have cluster administrator permissions on OpenShift Container Platform (or cluster or dedicated administrator permissions on OpenShift Dedicated or Red Hat OpenShift Service on AWS), you can create a DomainMapping custom resource (CR) by using the Administrator perspective in the web console. Prerequisites You have logged in to the web console. You are in the Administrator perspective. You have installed the OpenShift Serverless Operator. You have installed Knative Serving. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the cluster. Procedure Navigate to CustomResourceDefinitions and use the search box to find the DomainMapping custom resource definition (CRD). Click the DomainMapping CRD, then navigate to the Instances tab. Click Create DomainMapping . Modify the YAML for the DomainMapping CR so that it includes the following information for your instance: apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1 1 The custom domain name that you want to map to the target CR. 2 The namespace of both the DomainMapping CR and the target CR. 3 The name of the target CR to map to the custom domain. 4 The type of CR being mapped to the custom domain. Example domain mapping to a Knative service apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: custom-ksvc-domain.example.com namespace: default spec: ref: name: showcase kind: Service apiVersion: serving.knative.dev/v1 Verification Access the custom domain by using a curl request. For example: Example command USD curl custom-ksvc-domain.example.com Example output {"artifact":"knative-showcase","greeting":"Welcome"} 14.5.2. Restricting cipher suites by using the Administrator perspective When you specify net-kourier for ingress and use DomainMapping , the TLS for OpenShift routing is set to passthrough, and TLS is handled by the Kourier Gateway. In such cases, you might need to restrict which TLS cipher suites for Kourier are allowed for users. Prerequisites You have logged in to the web console. You are in the Administrator perspective. You have installed the OpenShift Serverless Operator. You have installed Knative Serving. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Note Your custom domain must point to the IP address of the cluster. Procedure In the KnativeServing CR, use the cipher-suites value to specify the cipher suites you want to enable: KnativeServing CR example spec: config: kourier: cipher-suites: ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-ECDSA-CHACHA20-POLY1305 Other cipher suites will be disabled. You can specify multiple suites by separating them with commas. Note The Kourier Gateway's container image utilizes the Envoy proxy image, and the default enabled cipher suites depend on the version of the Envoy proxy. 14.6. 
Securing a mapped service using a TLS certificate 14.6.1. Securing a service with a custom domain by using a TLS certificate After you have configured a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a Kubernetes TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created. Prerequisites You configured a custom domain for a Knative service and have a working DomainMapping CR. You have a TLS certificate from your Certificate Authority provider or a self-signed certificate. You have obtained the cert and key files from your Certificate Authority provider, or a self-signed certificate. Install the OpenShift CLI ( oc ). Procedure Create a Kubernetes TLS secret: USD oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file> Add the networking.internal.knative.dev/certificate-uid: <id> label to the Kubernetes TLS secret: USD oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid="<id>" If you are using a third-party secret provider such as cert-manager , you can configure your secret manager to label the Kubernetes TLS secret automatically. cert-manager users can use the secret template offered to automatically generate secrets with the correct label. In this case, secret filtering is done based on the key only, but this value can carry useful information such as the certificate ID that the secret contains. Note The cert-manager Operator for Red Hat OpenShift is a Technology Preview feature. For more information, see the Installing the cert-manager Operator for Red Hat OpenShift documentation. Update the DomainMapping CR to use the TLS secret that you have created: apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 # TLS block specifies the secret to be used tls: secretName: <tls_secret_name> Verification Verify that the DomainMapping CR status is True , and that the URL column of the output shows the mapped domain with the scheme https : USD oc get domainmapping <domain_name> Example output NAME URL READY REASON example.com https://example.com True Optional: If the service is exposed publicly, verify that it is available by running the following command: USD curl https://<domain_name> If the certificate is self-signed, skip verification by adding the -k flag to the curl command. 14.6.2. Improving net-kourier memory usage by using secret filtering By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to a substantial overhead when many resources are available, which can cause the Knative net-kourier ingress controller to fail on large clusters due to memory leaking. However, a filtering mechanism is available for the Knative net-kourier ingress controller, which enables the controller to only fetch Knative related secrets. The secret filtering is enabled by default on the OpenShift Serverless Operator side. An environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true , is added by default to the net-kourier controller pods. Important If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets.
Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. A project that you created or that you have roles and permissions for to create applications and other workloads. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You can disable the secret filtering by setting the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to false by using the workloads field in the KnativeServing custom resource (CR). Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ... workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-kourier-controller
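To confirm whether secret filtering is in effect, you can inspect the environment of the net-kourier controller. The command below is a sketch: it assumes the controller Deployment is named net-kourier-controller and runs in the knative-serving-ingress namespace, which may differ in your installation.

# Sketch: print the value of the filtering variable on the net-kourier controller
# (the deployment name and namespace are assumptions; adjust them for your installation)
oc get deployment net-kourier-controller -n knative-serving-ingress \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID")].value}'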
|
[
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: showcase kind: Service apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1",
"oc apply -f <filename>",
"kn domain create <domain_mapping_name> --ref <target_name>",
"kn domain create example.com --ref showcase",
"kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>",
"kn domain create example.com --ref ksvc:showcase:example-namespace",
"kn domain create <domain_mapping_name> --ref <kroute:route_name>",
"kn domain create example.com --ref kroute:example-route",
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: custom-ksvc-domain.example.com namespace: default spec: ref: name: showcase kind: Service apiVersion: serving.knative.dev/v1",
"curl custom-ksvc-domain.example.com",
"{\"artifact\":\"knative-showcase\",\"greeting\":\"Welcome\"}",
"spec: config: kourier: cipher-suites: ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-ECDSA-CHACHA20-POLY1305",
"oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>",
"oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid=\"<id>\"",
"apiVersion: serving.knative.dev/v1beta1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 TLS block specifies the secret to be used tls: secretName: <tls_secret_name>",
"oc get domainmapping <domain_name>",
"NAME URL READY REASON example.com https://example.com True",
"curl https://<domain_name>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-kourier-controller"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/configuring-custom-domains-for-knative-services
|
Chapter 26. Cron
|
Chapter 26. Cron Only consumer is supported The Cron component is a generic interface component that allows triggering events at specific time intervals specified using the Unix cron syntax (e.g. 0/2 * * * * ? to trigger an event every two seconds). Being an interface component, the Cron component does not contain a default implementation, instead it requires that users plug in the implementation of their choice. The following standard Camel components support the Cron endpoints: Camel-quartz Camel-spring The Cron component is also supported in Camel K , which can use the Kubernetes scheduler to trigger the routes when required by the cron expression. Camel K does not require additional libraries to be plugged when using cron expressions compatible with Kubernetes cron syntax. 26.1. Dependencies When using cron with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency> Additional libraries may be needed in order to plug a specific implementation. 26.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 26.2.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 26.2.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 26.3. Component Options The Cron component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cronService (advanced) The id of the CamelCronService to use when multiple implementations are provided. String 26.4. Endpoint Options The Cron endpoint is configured using URI syntax: with the following path and query parameters: 26.4.1. Path Parameters (1 parameter) Name Description Default Type name (consumer) Required The name of the cron trigger. String 26.4.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean schedule (consumer) Required A cron expression that will be used to generate events. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 26.5. Usage The component can be used to trigger events at specified times, as in the following example: from("cron:tab?schedule=0/1+*+*+*+*+?") .setBody().constant("event") .log("USD{body}"); The schedule expression 0/3+10+*+*+*+? can also be written as 0/3 10 * * * ? and triggers an event every three seconds only in the tenth minute of each hour. Parts in the schedule expression mean (in order): Seconds (optional) Minutes Hours Day of month Month Day of week Year (optional) Schedule expressions can be made of 5 to 7 parts. When expressions are composed of 6 parts, the first item is the "seconds" part (and year is considered missing). Other valid examples of schedule expressions are: 0/2 * * * ? (5 parts, an event every two minutes) 0 0/2 * * * MON-FRI 2030 (7 parts, an event every two minutes, Monday to Friday, only in year 2030) Routes can also be written using the XML DSL. <route> <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/> <setBody> <constant>event</constant> </setBody> <to uri="log:info"/> </route> 26.6. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.cron.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.
true Boolean camel.component.cron.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cron.cron-service The id of the CamelCronService to use when multiple implementations are provided. String camel.component.cron.enabled Whether to enable auto configuration of the cron component. This is enabled by default. Boolean
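As a concrete illustration of the Spring Boot auto-configuration options listed above, the following application.properties sketch sets each of them explicitly; the option names are taken from the table, while the myQuartzCronService id is a placeholder for whichever CamelCronService implementation you register.

# application.properties (sketch) - option names taken from the auto-configuration table above
camel.component.cron.enabled=true
camel.component.cron.autowired-enabled=true
camel.component.cron.bridge-error-handler=true
# Only needed when more than one CamelCronService implementation is available;
# "myQuartzCronService" is a placeholder id for the implementation you register.
camel.component.cron.cron-service=myQuartzCronService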
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency>",
"cron:name",
"from(\"cron:tab?schedule=0/1+*+*+*+*+?\") .setBody().constant(\"event\") .log(\"USD{body}\");",
"<route> <from uri=\"cron:tab?schedule=0/1+*+*+*+*+?\"/> <setBody> <constant>event</constant> </setBody> <to uri=\"log:info\"/> </route>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cron-component-starter
|
D.2. Common ACLs
|
D.2. Common ACLs This section covers the default access control configuration that is common for all four subsystem types. These access control rules manage access to basic and common configuration settings, such as logging and adding users and groups. Important These ACLs are common in that the same ACLs occur in each subsystem instance's acl.ldif file. These are not shared ACLs in the sense that the configuration files or settings are held in common by all subsystem instances. As with all other instance configuration, these ACLs are maintained independently of other subsystem instances, in the instance-specific acl.ldif file. D.2.1. certServer.acl.configuration Controls operations to the ACL configuration. The default configuration is: Table D.2. certServer.acl.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View ACL resources and list ACL resources, ACL listing evaluators, and ACL evaluator types. Allow Administrators Agents Auditors modify Add, delete, and update ACL evaluators. Allow Administrators D.2.2. certServer.admin.certificate Controls which users can import a certificate through a Certificate Manager. By default, this operation is allowed to everyone. The default configuration is: Note This entry is associated with the CA administration web interface which is used to configure the instance. This ACL is only available during instance configuration and is unavailable after the CA is running. Table D.3. certServer.admin.certificate ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups import Import a CA administrator certificate, and retrieve certificates by serial number. Allow Anyone D.2.3. certServer.auth.configuration Controls operations on the authentication configuration. Table D.4. certServer.auth.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View authentication plug-ins, authentication type, configured authentication manager plug-ins, and authentication instances. List authentication manager plug-ins and authentication manager instances. Allow Administrators Agents Auditors modify Add or delete authentication plug-ins and authentication instances. Modify authentication instances. Allow Administrators D.2.4. certServer.clone.configuration Controls who can read and modify the configuration information used in cloning. The default setting is: Table D.5. certServer.clone.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View original instance configuration. Allow Enterprise Administrators modify Modify original instance configuration. Allow Enterprise Administrators D.2.5. certServer.general.configuration Controls access to the general configuration of the subsystem instance, including who can view and edit the CA's settings. Table D.6. certServer.general.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View the operating environment, LDAP configuration, SMTP configuration, server statistics, encryption, token names, subject name of certificates, certificate nicknames, all subsystems loaded by the server, CA certificates, and all certificates for management. Allow Administrators Agents Auditors modify Modify the settings for the LDAP database, SMTP, and encryption. Issue import certificates, install certificates, trust and untrust CA certificates, import cross-pair certificates, and delete certificates. Perform server restart and stop operations. 
Log in all tokens and check token status. Run self-tests on demand. Get certificate information. Process the certificate subject name. Validate the certificate subject name, certificate key length, and certificate extension. Allow Administrators D.2.6. certServer.log.configuration Controls access to the log configuration for the Certificate Manager, including changing the log settings. Table D.7. certServer.log.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View log plug-in information, log plug-in configuration, and log instance configuration. List log plug-ins and log instances (excluding NTEventLog). Allow Administrators Agents Auditors modify Add and delete log plug-ins and log instances. Modify log instances, including log rollover parameters and log level. Allow Administrators D.2.7. certServer.log.configuration.fileName Restricts access to change the file name of a log for the instance. Table D.8. certServer.log.configuration.fileName ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View the value of the fileName parameter for a log instance. Allow Administrators Agents Auditors modify Change the value of the fileName parameter for a log instance. Deny Anyone D.2.8. certServer.log.content.system Controls who can view the instance's logs. Table D.9. certServer.log.content.system ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View log content. List all logs. Allow Administrators Agents Auditors D.2.9. certServer.log.content.signedAudit Controls who has access to the signed audit logs. The default setting is: Table D.10. certServer.log.content.signedAudit ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View log content. List logs. Allow Auditors D.2.10. certServer.registry.configuration Controls access to the administration registry, the file that is used to register plug-in modules. Currently, this is only used to register certificate profile plug-ins. Table D.11. certServer.registry.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View the administration registry, supported policy constraints, profile plug-in configuration, and the list of profile plug-ins. Allow Administrators Agents Auditors modify Register individual profile implementation plug-ins. Allow Administrators
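For reference, each of these access control rules is stored as a resourceACLS value in the instance's acl.ldif file, combining the resource name, the permitted operations, the ACI expressions, and a description. The following single-entry sketch shows the general shape; the field ordering and exact syntax should be checked against your own instance's acl.ldif rather than copied verbatim.

# Sketch of one resourceACLS value in acl.ldif (verify the exact syntax against your instance)
# format: resource name : operations : ACI expressions : description
resourceACLS: certServer.log.configuration:read,modify:allow (read) group="Administrators" || group="Auditors";allow (modify) group="Administrators":Administrators and auditors may read the log configuration, and administrators may modify it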
|
[
"allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"",
"allow (import) user=\"anybody\"",
"allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators",
"allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\"",
"allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";allow (modify) group=\"Administrators\"",
"allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";allow (modify) group=\"Administrators\"",
"allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";deny (modify) user=anybody",
"allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\"",
"allow (read) group=\"Auditors\"",
"allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\""
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/common-acl-reference
|
Chapter 16. Re-enrolling an IdM client
|
Chapter 16. Re-enrolling an IdM client If a client machine has been destroyed and lost connection with the IdM servers, for example due to the client's hardware failure, and you still have its keytab, you can re-enroll the client. In this scenario, you want to get the client back in the IdM environment with the same hostname. 16.1. Client re-enrollment in IdM If a client machine has been destroyed and lost connection with the IdM servers, for example due to the client's hardware failure, and you still have its keytab, you can re-enroll the client. In this scenario, you want to get the client back in the IdM environment with the same hostname. During the re-enrollment, the client generates a new Kerberos key and SSH keys, but the identity of the client in the LDAP database remains unchanged. After the re-enrollment, the host has its keys and other information in the same LDAP object with the same FQDN as previously, before the machine's loss of connection with the IdM servers. Important You can only re-enroll clients whose domain entry is still active. If you uninstalled a client (using ipa-client-install --uninstall ) or disabled its host entry (using ipa host-disable ), you cannot re-enroll it. You cannot re-enroll a client after you have renamed it. This is because in IdM, the key attribute of the client's entry in LDAP is the client's hostname, its FQDN . As opposed to re-enrolling a client, during which the client's LDAP object remains unchanged, the outcome of renaming a client is that the client has its keys and other information in a different LDAP object with a new FQDN . Therefore, the only way to rename a client is to uninstall the host from IdM, change the host's hostname, and install it as an IdM client with a new name. For details on how to rename a client, see Renaming IdM client systems . What happens during client re-enrollment During re-enrollment, IdM: Revokes the original host certificate Creates new SSH keys Generates a new keytab 16.2. Re-enrolling a client by using user credentials: Interactive re-enrollment Re-enroll an Identity Management (IdM) client interactively by using the credentials of an authorized user. Procedure Re-create the client machine with the same host name. Run the ipa-client-install --force-join command on the client machine: The script prompts for a user whose identity will be used to re-enroll the client. This could be, for example, a hostadmin user with the Enrollment Administrator role: Additional resources For a more detailed procedure on enrolling clients by using an authorized user's credentials, see Installing a client by using user credentials: Interactive installation . 16.3. Re-enrolling a client by using the client keytab: Non-interactive re-enrollment You can re-enroll an Identity Management (IdM) client non-interactively by using the krb5.keytab keytab file of the client system from the deployment. For example, re-enrollment using the client keytab is appropriate for an automated installation. Prerequisites You have backed up the keytab of the client from the deployment on another system. Procedure Re-create the client machine with the same host name. Copy the keytab file from the backup location to the re-created client machine, for example its /tmp/ directory. Important Do not put the keytab in the /etc/krb5.keytab file as old keys are removed from this location during the execution of the ipa-client-install installation script. Use the ipa-client-install utility to re-enroll the client. 
Specify the keytab location with the --keytab option: Note The keytab specified in the --keytab option is only used when authenticating to initiate the re-enrollment. During the re-enrollment, IdM generates a new keytab for the client. 16.4. Testing an IdM client The command line informs you that the ipa-client-install was successful, but you can also do your own test. To test that the Identity Management (IdM) client can obtain information about users defined on the server, check that you are able to resolve a user defined on the server. For example, to check the default admin user: To test that authentication works correctly, su to a root user from a non-root user:
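In addition to the id and su checks above, a quick way to confirm that the re-enrolled client can obtain Kerberos credentials from the IdM servers is to request and list a ticket for a known user; the admin principal and the EXAMPLE.COM realm below are simply the examples used elsewhere in this chapter.

# Confirm that the client can obtain a Kerberos ticket (example principal: admin)
kinit admin     # prompts for the password of admin@EXAMPLE.COM
klist           # should show a ticket-granting ticket for admin@EXAMPLE.COM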
|
[
"ipa-client-install --force-join",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"ipa-client-install --keytab /tmp/krb5.keytab",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/re-enrolling-an-ipa-client_installing-identity-management
|
Providing feedback on Red Hat build of Quarkus documentation
|
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/proc_providing-feedback-on-red-hat-documentation_quarkus-openshift
|
Chapter 7. Managing the collection of usage data
|
Chapter 7. Managing the collection of usage data Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Collecting this data allows Red Hat to monitor and improve our software and support. For further details about the data Red Hat collects, see Usage data collection notice for OpenShift AI . Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster. See Disabling usage data collection for instructions on disabling the collection of this data in your cluster. If you have disabled data collection on your cluster, and you want to enable it again, see Enabling usage data collection for more information. 7.1. Usage data collection notice for OpenShift AI In connection with your use of this Red Hat offering, Red Hat may collect usage data about your use of the software. This data allows Red Hat to monitor the software and to improve Red Hat offerings and support, including identifying, troubleshooting, and responding to issues that impact users. What information does Red Hat collect? Tools within the software monitor various metrics and this information is transmitted to Red Hat. Metrics include information such as: Information about applications enabled in the product dashboard. The deployment sizes used (that is, the CPU and memory resources allocated). Information about documentation resources accessed from the product dashboard. The name of the notebook images used (that is, Minimal Python, Standard Data Science, and other images). A unique random identifier that is generated during the initial user login and used to associate data with a particular username. Usage information about components, features, and extensions. Third Party Service Providers Red Hat uses certain third party service providers to collect the telemetry data. Security Red Hat employs technical and organizational measures designed to protect the usage data. Personal Data Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such personal information and treat such personal information in accordance with Red Hat's Privacy Statement. For more information about Red Hat's privacy practices, see Red Hat's Privacy Statement . Enabling and Disabling Usage Data You can disable or enable usage data by following the instructions in Disabling usage data collection or Enabling usage data collection . 7.2. Enabling usage data collection Red Hat OpenShift AI administrators can select whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster. If you have disabled data collection previously, you can re-enable it by following these steps. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Locate the Usage data collection section. Select the Allow collection of usage data checkbox. Click Save changes . Verification A notification is shown when settings are updated: Settings changes saved. Additional resources Usage data collection notice for OpenShift AI 7.3. Disabling usage data collection Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster.
Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster. You can disable data collection by following these steps. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Locate the Usage data collection section. Clear the Allow collection of usage data checkbox. Click Save changes . Verification A notification is shown when settings are updated: Settings changes saved. Additional resources Usage data collection notice for OpenShift AI
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_resources/managing-collection-of-usage-data
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_security_operations/proc_providing-feedback-on-red-hat-documentation
|
Chapter 3. Differences between OpenShift Container Platform 3 and 4
|
Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.13 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.13 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. Beginning with OpenShift Container Platform 4.13, RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This enhancement enables the latest fixes and features as well as the latest hardware support and driver updates. For more information about how this upgrade to RHEL 9.2 might affect your options configuration and services as well as driver and container support, see the RHCOS now uses RHEL 9.2 in the OpenShift Container Platform 4.13 release notes . For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically.
For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.13, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.13 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.13, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.13 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.13. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.13. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.13 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.13 ships with several CSI drivers . You can also install your own driver. 
For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.13: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.13. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.13, CSI drivers are the new default for the following in-tree volume types: Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform Persistent Disk (GCP PD) OpenStack Cinder VMware vSphere Note As of OpenShift Container Platform 4.13, VMware vSphere is not available by default. However, you can opt into VMware vSphere. All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.13. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.13 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.13 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.13, follow the steps to configure multitenant isolation using network policy (a minimal example policy is sketched at the end of this chapter). For more information, see About network policy . OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking In OpenShift Container Platform 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In OpenShift Container Platform 4.13, OVN-Kubernetes is now the default networking plugin. For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift SDN network plugin . 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.13.
Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.13. For more information on the explicitly unsupported logging cases, see the logging support documentation . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.13. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.13. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.13 requires mutual TLS, whereas in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.13. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user, as the restricted SCC could be in OpenShift Container Platform 3.11. Broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users who want to use it must be specifically granted permission to use it. For more information, see Managing security context constraints . 3.3.5. Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.13. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring structure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4.
If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Applying custom Alertmanager configuration .
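As an illustration of the network policy approach described in the networking considerations above, the following is a minimal sketch of a NetworkPolicy that reproduces the namespace-scoped isolation that ovs-multitenant provided, by allowing ingress only from pods in the same namespace. The policy name and the <project_name> value are placeholders, and a complete multitenant-style setup, as described in the linked procedure, typically also adds policies that admit traffic from the OpenShift ingress and monitoring namespaces.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # placeholder name
  namespace: <project_name>    # apply one policy per project that needs isolation
spec:
  podSelector: {}              # selects every pod in the namespace
  ingress:
  - from:
    - podSelector: {}          # ...but only accepts traffic from pods in this same namespace
  policyTypes:
  - Ingress
You can apply the policy with oc apply -f allow-same-namespace.yaml and repeat it for each project that needs this behavior.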
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/planning-migration-3-4
|
Release notes and known issues
|
Release notes and known issues Red Hat OpenShift Dev Spaces 3.15 Release notes and known issues for Red Hat OpenShift Dev Spaces 3.15 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/release_notes_and_known_issues/index
|
Chapter 7. Infrastructure requirements
|
Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.16 is supported only on OpenShift Container Platform version 4.16 and its minor versions. Bug fixes for a version of Red Hat OpenShift Data Foundation are released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation. If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 6.7 Update 2 or later, or vSphere 7.0 or later. For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an internal cluster must meet both the storage device requirements and have a storage class providing either a vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86.
An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install a RHCS cluster, see the installation guide . 7.2.2. IBM FlashSystem To use IBM FlashSystem as a pluggable external storage on other providers, you need to first deploy it before you can deploy OpenShift Data Foundation, which would use the IBM FlashSystem storage class as a backing storage. For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation . For instructions on how to deploy OpenShift Data Foundation, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance on designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with an additional 500 GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GiB of memory are required. 7.3.1.
Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs. 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption. Table 7.6. Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default.
The NFS volume can be accessed in two ways: In-cluster: by an application pod inside the OpenShift cluster. Out of cluster: from outside the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. (A short node labeling sketch appears at the end of this chapter.) 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation both ensures that nodes stay below cloud provider dynamic storage device attachment limits and limits the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption.
Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB
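To relate these requirements back to the pod placement rules in Section 7.4: OpenShift Data Foundation schedules its pods only onto nodes that carry the cluster.ocs.openshift.io/openshift-storage label. The following is a minimal sketch, assuming you are preparing three worker nodes manually for a local storage deployment; the node names are placeholders, and in many deployments the label is applied for you when you select nodes during installation.
# Label the nodes that should run OpenShift Data Foundation pods (placeholder node names)
oc label node worker-0 worker-1 worker-2 cluster.ocs.openshift.io/openshift-storage=""

# Confirm which nodes are eligible for OpenShift Data Foundation pod placement
oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o wide
The second command should list at least three nodes, one in each failure domain, before you proceed with deployment or capacity expansion.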
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/infrastructure-requirements_rhodf
|
Chapter 19. Automation controller configuration
|
Chapter 19. Automation controller configuration You can configure automation controller settings within the Settings screen in the following tabs: Each tab contains fields with a Reset option, enabling you to revert any value entered back to the default value. Reset All enables you to revert all the values to their factory default values. Save applies the changes you make, but it does not exit the edit dialog. To return to the Settings page, from the navigation panel select Settings or use the breadcrumbs at the top of the current view. 19.1. Authenticating automation controller Through the automation controller UI, you can set up a simplified login through various authentication types, such as GitHub, Google, LDAP, RADIUS, and SAML. Once you create and register your developer application with the appropriate service, you can set up authorizations for them. Procedure From the navigation panel, select Settings . Select from the following Authentication options: Azure AD settings Github settings Google OAuth2 settings LDAP Authentication RADIUS settings SAML settings Transparent SAML Logins Enabling Logging for SAML TACACS+ settings Generic OIDC settings Ensure that you include all the required information. Click Save to apply the settings or Cancel to abandon the changes. 19.2. Configuring jobs The Jobs tab enables you to configure the types of modules that can be used by the automation controller's Ad Hoc Commands feature, set limits on the number of jobs that can be scheduled, define their output size, and configure other details pertaining to working with jobs in automation controller. Procedure From the navigation panel, select Settings . Select Jobs settings in the Jobs option. Set the configurable options from the fields provided. Click the tooltip icon next to a field for additional information about it. For more information about configuring Galaxy settings, see the Ansible Galaxy Support section of the Automation controller User Guide . Note The values for all timeouts are in seconds. Click Save to apply the settings or Cancel to abandon the changes. 19.3. Configuring system settings The System tab enables you to complete the following actions: Define the base URL for the automation controller host Configure alerts Enable activity capturing Control visibility of users Enable certain automation controller features and functionality through a license file Configure logging aggregation options Procedure From the navigation panel, select Settings . Choose from the following System options: Miscellaneous System settings : Enable activity streams, specify the default execution environment, define the base URL for the automation controller host, enable automation controller administration alerts, set user visibility, define analytics, specify usernames and passwords, and configure proxies. Miscellaneous Authentication settings : Configure options associated with authentication methods (built-in or SSO), sessions (timeout, number of sessions logged in, tokens), and social authentication mapping. Logging settings : Configure logging options based on the type you choose: For more information about each of the logging aggregation types, see the Logging and Aggregation section. Set the configurable options from the fields provided. Click the tooltip icon next to a field for additional information about it. The following is an example of the Miscellaneous System settings: Note The Allow External Users to Create Oauth2 Tokens setting is disabled by default.
This ensures external users cannot create their own tokens. If you enable then disable it, any tokens created by external users in the meantime still exist, and are not automatically revoked. Click Save to apply the settings and Cancel to abandon the changes. 19.4. Configuring the user interface The User Interface tab enables you to set automation controller analytics settings, and configure custom logos and login messages. Procedure From the navigation panel, select Settings . Select User Interface settings from the User Interface option. Click Edit to configure your preferences. 19.4.1. Configuring usability analytics and data collection Usability data collection is included with automation controller to collect data to understand how users interact with it, to enhance future releases, and to streamline your user experience. Only users installing a trial of Red Hat Ansible Automation Platform or a fresh installation of automation controller are opted-in for this data collection. Automation controller collects user data automatically to help improve the product. You can opt out or control the way automation controller collects data by setting your participation level in the User Interface settings . Procedure From the navigation panel, select Settings . Select User Interface settings from the User Interface options. Click Edit . Select the desired level of data collection from the User Analytics Tracking State list: Off : Prevents any data collection. Anonymous : Enables data collection without your specific user data. Detailed : Enables data collection including your specific user data. Click Save to apply the settings or Cancel to abandon the changes. Additional resources For more information, see the Red Hat Privacy Statement . 19.4.2. Custom logos and images Automation controller supports the use of a custom logo. You can add a custom logo by uploading an image and supplying a custom login message from the User Interface settings . To access these settings, select Settings from the navigation panel. For the best results, use a .png file with a transparent background. GIF, PNG, and JPEG formats are supported. You can add specific information (such as a
|
[
"managed > manifest_limit => non-compliant managed =< manifest_limit => compliant"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-config
|
31.4. Adding HBAC Service Groups
|
31.4. Adding HBAC Service Groups HBAC service groups can simplify HBAC rules management: instead of adding individual services to an HBAC rule, you can add a whole service group. To add an HBAC service group, you can use: the IdM web UI (see the section called "Web UI: Adding an HBAC Service Group" ) the command line (see the section called "Command Line: Adding an HBAC Service Group" ) Web UI: Adding an HBAC Service Group Select Policy Host-Based Access Control HBAC Service Groups . Click Add to add an HBAC service group. Enter a name for the service group, and click Add and Edit . On the service group configuration page, click Add to add an HBAC service as a member of the group. Figure 31.7. Adding HBAC Services to an HBAC Service Group Command Line: Adding an HBAC Service Group Use the ipa hbacsvcgroup-add command to add an HBAC service group. For example, to add a group named login : Use the ipa hbacsvcgroup-add-member command to add an HBAC service as a member of the group. For example, to add the sshd service to the login group:
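(The corresponding ipa commands and their output are shown in the listing that follows.) As a further, illustrative step that is not part of this procedure, you would typically reference the new service group from an HBAC rule instead of adding individual services to the rule; this is a minimal sketch in which the rule name allow_login is a placeholder:
# Attach the whole service group to an existing HBAC rule (rule name is a placeholder)
ipa hbacrule-add-service allow_login --hbacsvcgroups=login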
|
[
"ipa hbacsvcgroup-add Service group name: login -------------------------------- Added HBAC service group \"login\" -------------------------------- Service group name: login",
"ipa hbacsvcgroup-add-member Service group name: login [member HBAC service]: sshd Service group name: login Member HBAC service: sshd ------------------------- Number of members added 1 -------------------------"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/hbac-add-service-group
|
Release notes for Red Hat build of OpenJDK 21.0.2
|
Release notes for Red Hat build of OpenJDK 21.0.2 Red Hat build of OpenJDK 21 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.2/index
|
Chapter 4. Installing the Migration Toolkit for Containers
|
Chapter 4. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 4.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. Table 4.1. MTC compatibility: Migrating from a legacy platform OpenShift Container Platform 4.5 or earlier OpenShift Container Platform 4.6 or later Stable MTC version MTC 1.7. z Legacy 1.7 operator: Install manually with the operator.yml file. Important This cluster cannot be the control cluster. MTC 1.7. z Install with OLM, release channel release-v1.7 Note Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 4.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD sudo podman login registry.redhat.io Download the operator.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your source cluster. 
Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi8 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 4.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 4.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.7, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 4.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 4.4.1.1. 
TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 4.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 4.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 4.4.2.1. NetworkPolicy configuration 4.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. 
The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 4.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 4.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 4.4.2.3. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 4.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 4.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 4.5. 
Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 4.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 4.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Container Storage. Prerequisites You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 4.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. 
Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 4.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 4.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" --query 'password' -o tsv` \ AZURE_CLIENT_ID=`az ad sp list --display-name "velero" \ --query '[0].appId' -o tsv` Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 4.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 4.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
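As an optional verification after deleting these resources, you can confirm that no MTC or Velero artifacts remain on the cluster. The following commands are a minimal sketch that reuses the resource name patterns from this procedure; they should return no output when the cleanup is complete: USD oc get crds -o name | grep -E 'migration.openshift.io|velero' USD oc get clusterroles,clusterrolebindings -o name | grep -E 'migration.openshift.io|migration-operator|velero'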
|
[
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migration_toolkit_for_containers/installing-mtc
|
Chapter 13. Managing machines with the Cluster API
|
Chapter 13. Managing machines with the Cluster API 13.1. About the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster API is an upstream project that is integrated into OpenShift Container Platform as a Technology Preview for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. 13.1.1. Cluster API overview You can use the Cluster API to create and manage compute machine sets and compute machines in your OpenShift Container Platform cluster. This capability is in addition or an alternative to managing machines with the Machine API. For OpenShift Container Platform 4.18 clusters, you can use the Cluster API to perform node host provisioning management actions after the cluster installation finishes. This system enables an elastic, dynamic provisioning method on top of public or private cloud infrastructure. With the Cluster API Technology Preview, you can create compute machines and compute machine sets on OpenShift Container Platform clusters for supported providers. You can also explore the features that are enabled by this implementation that might not be available with the Machine API. 13.1.1.1. Cluster API benefits By using the Cluster API, OpenShift Container Platform users and developers gain the following advantages: The option to use upstream community Cluster API infrastructure providers that might not be supported by the Machine API. The opportunity to collaborate with third parties who maintain machine controllers for infrastructure providers. The ability to use the same set of Kubernetes tools for infrastructure management in OpenShift Container Platform. The ability to create compute machine sets by using the Cluster API that support features that are not available with the Machine API. 13.1.1.2. Cluster API limitations Using the Cluster API to manage machines is a Technology Preview feature and has the following limitations: To use this feature, you must enable the TechPreviewNoUpgrade feature set. Important Enabling this feature set cannot be undone and prevents minor version updates. Only Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Red Hat OpenStack Platform (RHOSP), and VMware vSphere clusters can use the Cluster API. You must manually create some of the primary resources that the Cluster API requires. For more information, see "Getting started with the Cluster API". You cannot use the Cluster API to manage control plane machines. Migration of existing compute machine sets created by the Machine API to Cluster API compute machine sets is not supported. Full feature parity with the Machine API is not available. For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. 
For more information and a workaround for this issue, see "Referencing the intended objects when using the CLI" in the troubleshooting content. Additional resources Enabling features using feature gates Getting started with the Cluster API Referencing the intended objects when using the CLI 13.1.2. Cluster API architecture The OpenShift Container Platform integration of the upstream Cluster API is implemented and managed by the Cluster CAPI Operator. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, in contrast to the Machine API, which uses the openshift-machine-api namespace. 13.1.2.1. The Cluster CAPI Operator The Cluster CAPI Operator is an OpenShift Container Platform Operator that maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. If a cluster is configured correctly to allow the use of the Cluster API, the Cluster CAPI Operator installs the Cluster API components on the cluster. For more information, see the "Cluster CAPI Operator" entry in the Cluster Operators reference content. Additional resources Cluster CAPI Operator 13.1.2.2. Cluster API primary resources The Cluster API consists of the following primary resources. For the Technology Preview of this feature, you must create some of these resources manually in the openshift-cluster-api namespace. Cluster A fundamental unit that represents a cluster that is managed by the Cluster API. Infrastructure cluster A provider-specific resource that defines properties that all of the compute machine sets in the cluster share, such as the region and subnets. Machine template A provider-specific template that defines the properties of the machines that a compute machine set creates. Machine set A group of machines. Compute machine sets are to machines as replica sets are to pods. To add machines or scale them down, change the replicas field on the compute machine set custom resource to meet your compute needs. With the Cluster API, a compute machine set references a Cluster object and a provider-specific machine template. Machine A fundamental unit that describes the host for a node. The Cluster API creates machines based on the configuration in the machine template. 13.2. Getting started with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For the Cluster API Technology Preview, you must manually create some of the primary resources that the Cluster API requires. 13.2.1. Creating the Cluster API primary resources To create the Cluster API primary resources, you must obtain the cluster ID value, which you use for the <cluster_name> parameter in the cluster resource manifest. 13.2.1.1. Obtaining the cluster ID value You can find the cluster ID value by using the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. 
You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure Obtain the value of the cluster ID by running the following command: USD oc get infrastructure cluster \ -o jsonpath='{.status.infrastructureName}' You can create the Cluster API primary resources manually by creating YAML manifest files and applying them with the OpenShift CLI ( oc ). 13.2.1.2. Creating the Cluster API cluster resource You can create the cluster resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have the cluster ID value. Procedure Create a YAML file similar to the following. This procedure uses <cluster_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the cluster ID as the name of the cluster. 2 Specify the infrastructure kind for the cluster. The following values are valid: Cluster cloud provider Value Amazon Web Services (AWS) AWSCluster Google Cloud Platform (GCP) GCPCluster Microsoft Azure AzureCluster Red Hat OpenStack Platform (RHOSP) OpenStackCluster VMware vSphere VSphereCluster Create the cluster CR by running the following command: USD oc create -f <cluster_resource_file>.yaml Verification Confirm that the cluster CR exists by running the following command: USD oc get cluster Example output NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m The cluster resource is ready when the value of PHASE is Provisioned . Additional resources Cluster API configuration 13.2.1.3. Creating a Cluster API machine template You can create a provider-specific machine template resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created and applied the cluster resource. Procedure Create a YAML file similar to the following. This procedure uses <machine_template_resource_file>.yaml as an example file name. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 1 Specify the machine template kind. This value must match the value for your platform. The following values are valid: Cluster cloud provider Value Amazon Web Services (AWS) AWSMachineTemplate Google Cloud Platform (GCP) GCPMachineTemplate Microsoft Azure AzureMachineTemplate Red Hat OpenStack Platform (RHOSP) OpenStackMachineTemplate VMware vSphere VSphereMachineTemplate 2 Specify a name for the machine template. 3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. 
Create the machine template CR by running the following command: USD oc create -f <machine_template_resource_file>.yaml Verification Confirm that the machine template CR is created by running the following command: USD oc get <machine_template_kind> where <machine_template_kind> is the value that corresponds to your platform. Example output NAME AGE <template_name> 77m Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform Sample YAML for a Cluster API machine template resource on Microsoft Azure Sample YAML for a Cluster API machine template resource on RHOSP Sample YAML for a Cluster API machine template resource on VMware vSphere 13.2.1.4. Creating a Cluster API compute machine set You can create compute machine sets that use the Cluster API to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created the cluster and machine template resources. Procedure Create a YAML file similar to the following. This procedure uses <machine_set_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3 # ... 1 Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region> . 2 Specify the name of the cluster. 3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API compute machine set YAML for your provider. Create the compute machine set CR by running the following command: USD oc create -f <machine_set_resource_file>.yaml Confirm that the compute machine set CR is created by running the following command: USD oc get machineset -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m When the new compute machine set is available, the REPLICAS and AVAILABLE values match. If the compute machine set is not available, wait a few minutes and run the command again. Verification To verify that the compute machine set is creating machines according to your required configuration, review the lists of machines and nodes in the cluster by running the following commands: View the list of Cluster API machines: USD oc get machine -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. 
Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s View the list of nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5 Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform Sample YAML for a Cluster API compute machine set resource on Microsoft Azure Sample YAML for a Cluster API compute machine set resource on RHOSP Sample YAML for a Cluster API compute machine set resource on VMware vSphere 13.3. Managing machines with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 13.3.1. Modifying a Cluster API machine template You can update the machine template resource for your cluster by modifying the YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster that uses the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the machine template resource for your cluster by running the following command: USD oc get <machine_template_kind> 1 1 Specify the value that corresponds to your platform. The following values are valid: Cluster cloud provider Value Amazon Web Services AWSMachineTemplate Google Cloud Platform GCPMachineTemplate Microsoft Azure AzureMachineTemplate RHOSP OpenStackMachineTemplate VMware vSphere VSphereMachineTemplate Example output NAME AGE <template_name> 77m Write the machine template resource for your cluster to a file that you can edit by running the following command: USD oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml where <template_name> is the name of the machine template resource for your cluster. Make a copy of the <template_name>.yaml file with a different name. This procedure uses <modified_template_name>.yaml as an example file name. Use a text editor to make changes to the <modified_template_name>.yaml file that defines the updated machine template resource for your cluster. When editing the machine template resource, observe the following: The parameters in the spec stanza are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. You must use a value for the metadata.name parameter that differs from any existing values. Important For any Cluster API compute machine sets that reference this template, you must update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. 
Apply the machine template CR by running the following command: USD oc apply -f <modified_template_name>.yaml 1 1 Use the edited YAML file with a new name. steps For any Cluster API compute machine sets that reference this template, update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. For more information, see "Modifying a compute machine set by using the CLI." Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform Sample YAML for a Cluster API machine template resource on Microsoft Azure Sample YAML for a Cluster API machine template resource on RHOSP Sample YAML for a Cluster API machine template resource on VMware vSphere Modifying a compute machine set by using the CLI 13.3.2. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Cluster API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m Edit a compute machine set by running the following command: USD oc edit machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> \ -n openshift-cluster-api \ cluster.x-k8s.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 . 
Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machines.cluster.x-k8s.io <machine_name_updated_1> \ -n openshift-cluster-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Example output when deletion is complete for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform Sample YAML for a Cluster API compute machine set resource on Microsoft Azure Sample YAML for a Cluster API compute machine set resource on RHOSP Sample YAML for a Cluster API compute machine set resource on VMware vSphere 13.4. Cluster API configuration Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following example YAML files show how to make the Cluster API primary resources work together and configure settings for the machines that they create that are appropriate for your environment. 13.4.1. Sample YAML for a Cluster API cluster resource The cluster resource defines the name and infrastructure provider for the cluster and is managed by the Cluster API. This resource has the same structure for all providers. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 2 host: <control_plane_endpoint_address> port: 6443 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 3 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the name of the cluster. 2 Specify the IP address of the control plane endpoint and the port used to access it.
3 Specify the infrastructure kind for the cluster. The following values are valid: Cluster cloud provider Value Amazon Web Services AWSCluster GCP GCPCluster Azure AzureCluster RHOSP OpenStackCluster VMware vSphere VSphereCluster 13.4.2. Provider-specific configuration options The remaining Cluster API resources are provider-specific. For provider-specific configuration options for your cluster, see the following resources: Cluster API configuration options for Amazon Web Services Cluster API configuration options for Google Cloud Platform Cluster API configuration options for Microsoft Azure Cluster API configuration options for RHOSP Cluster API configuration options for VMware vSphere 13.5. Configuration options for Cluster API machines 13.5.1. Cluster API configuration options for Amazon Web Services Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Amazon Web Services (AWS) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML files show configurations for an Amazon Web Services cluster. 13.5.1.1.1. Sample YAML for a Cluster API machine template resource on Amazon Web Services The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # ... instanceType: m5.large ignition: storageType: UnencryptedUserData version: "3.2" ami: id: # ... subnet: filters: - name: tag:Name values: - # ... additionalSecurityGroups: - filters: - name: tag:Name values: - # ... 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.1.1.2. Sample YAML for a Cluster API compute machine set resource on Amazon Web Services The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines. 
apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5 1 Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region> . 2 3 Specify the cluster ID as the name of the cluster. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 13.5.2. Cluster API configuration options for Google Cloud Platform Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Google Cloud Platform (GCP) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.2.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML files show configurations for a Google Cloud Platform cluster. 13.5.2.1.1. Sample YAML for a Cluster API machine template resource on Google Cloud Platform The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.2.1.2. Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines. 
apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6 1 Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region> . 2 3 Specify the cluster ID as the name of the cluster. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 6 Specify the failure domain within the GCP region. 13.5.3. Cluster API configuration options for Microsoft Azure Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Microsoft Azure Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.3.1. Sample YAML for configuring Microsoft Azure clusters The following example YAML files show configurations for an Azure cluster. 13.5.3.1.1. Sample YAML for a Cluster API machine template resource on Microsoft Azure The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 disableExtensionOperations: true identity: UserAssigned image: id: /subscriptions/<subscription_id>/resourceGroups/<cluster_name>-rg/providers/Microsoft.Compute/galleries/gallery_<compliant_cluster_name>/images/<cluster_name>-gen2/versions/latest 4 networkInterfaces: - acceleratedNetworking: true privateIPConfigs: 1 subnetName: <cluster_name>-worker-subnet osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux sshPublicKey: <ssh_key_value> userAssignedIdentities: - providerID: 'azure:///subscriptions/<subscription_id>/resourcegroups/<cluster_name>-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<cluster_name>-identity' vmSize: Standard_D4s_v3 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 4 Specify an image that is compatible with your instance type. 
The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. Note Default OpenShift Container Platform cluster names contain hyphens ( - ), which are not compatible with Azure gallery name requirements. The value of <compliant_cluster_name> in this configuration must use underscores ( _ ) instead of hyphens to comply with these requirements. Other instances of <cluster_name> do not change. For example, a cluster name of jdoe-test-2m2np transforms to jdoe_test_2m2np . The full string for gallery_<compliant_cluster_name> in this example is gallery_jdoe_test_2m2np , not gallery_jdoe-test-2m2np . The complete value of spec.template.spec.image.id for this example value is /subscriptions/<subscription_id>/resourceGroups/jdoe-test-2m2np-rg/providers/Microsoft.Compute/galleries/gallery_jdoe_test_2m2np/images/jdoe-test-2m2np-gen2/versions/latest . 13.5.3.1.2. Sample YAML for a Cluster API compute machine set resource on Microsoft Azure The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 3 name: <template_name> 4 1 Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region> . 2 Specify the cluster ID as the name of the cluster. 3 Specify the machine template kind. This value must match the value for your platform. 4 Specify the machine template name. 13.5.4. Cluster API configuration options for Red Hat OpenStack Platform Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Red Hat OpenStack Platform (RHOSP) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.4.1. Sample YAML for configuring RHOSP clusters The following example YAML files show configurations for a RHOSP cluster. 13.5.4.1.1. Sample YAML for a Cluster API machine template resource on RHOSP The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 4 Specify the RHOSP flavor to use. For more information, see Creating flavors for launching instances . 5 Specify the image to use. 13.5.4.1.2. Sample YAML for a Cluster API compute machine set resource on RHOSP The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 6 Optional: Specify the name of the Nova availability zone for the machine set to create machines in. If you do not specify a value, machines are not restricted to a specific availability zone. 13.5.5. Cluster API configuration options for VMware vSphere Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your VMware vSphere Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.5.1. Sample YAML for configuring VMware vSphere clusters The following example YAML files show configurations for a VMware vSphere cluster. 13.5.5.1.1. Sample YAML for a Cluster API machine template resource on VMware vSphere The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 template: <vm_template_name> 4 server: <vcenter_server_ip> 5 diskGiB: 128 cloneMode: linkedClone 6 datacenter: <vcenter_data_center_name> 7 datastore: <vcenter_datastore_name> 8 folder: <vcenter_vm_folder_path> 9 resourcePool: <vsphere_resource_pool> 10 numCPUs: 4 memoryMiB: 16384 network: devices: - dhcp4: true networkName: "<vm_network_name>" 11 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 4 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 5 Specify the vCenter server IP or fully qualified domain name. 6 Specify the type of VM clone to use. The following values are valid: fullClone linkedClone When using the linkedClone type, the disk size matches the clone source instead of using the diskGiB value. For more information, see the vSphere documentation about VM clone types. 7 Specify the vCenter data center to deploy the compute machine set on. 8 Specify the vCenter datastore to deploy the compute machine set on. 9 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 10 Specify the vSphere resource pool for your VMs. 11 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 13.5.5.1.2. Sample YAML for a Cluster API compute machine set resource on VMware vSphere The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 4 name: <template_name> 5 failureDomain: 6 - name: <failure_domain_name> region: <region_a> zone: <zone_a> server: <vcenter_server_name> topology: datacenter: <region_a_data_center> computeCluster: "</region_a_data_center/host/zone_a_cluster>" resourcePool: "</region_a_data_center/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_data_center/datastore/datastore_a>" networks: - port-group 1 Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region> . 2 3 Specify the cluster ID as the name of the cluster. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 6 Specify the failure domain configuration details. Note Using multiple regions and zones on a vSphere cluster that uses the Cluster API is not a validated configuration. 13.6. 
Troubleshooting clusters that use the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the information in this section to understand and recover from issues you might encounter. Generally, troubleshooting steps for problems with the Cluster API are similar to those steps for problems with the Machine API. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, whereas the Machine API uses the openshift-machine-api namespace. When using oc commands that reference a namespace, be sure to reference the correct one. 13.6.1. Referencing the intended objects when using the CLI For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. This explanation uses the oc delete machine command, which deletes a machine, as an example. Cause When you run an oc command, oc communicates with the Kube API server to determine which objects to act upon. The Kube API server uses the first installed custom resource definition (CRD) it encounters alphabetically when an oc command is run. CRDs for Cluster API objects are in the cluster.x-k8s.io group, while CRDs for Machine API objects are in the machine.openshift.io group. Because the letter c precedes the letter m alphabetically, the Kube API server matches on the Cluster API object CRD. As a result, the oc command acts upon Cluster API objects. Consequences Due to this behavior, the following unintended outcomes can occur on a cluster that uses the Cluster API: For namespaces that contain both types of objects, commands such as oc get machine return only Cluster API objects. For namespaces that contain only Machine API objects, commands such as oc get machine return no results. Workaround You can ensure that oc commands act on the type of objects you intend by using the corresponding fully qualified name. Prerequisites You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure To delete a Machine API machine, use the fully qualified name machine.machine.openshift.io when running the oc delete machine command: $ oc delete machine.machine.openshift.io <machine_name> To delete a Cluster API machine, use the fully qualified name machine.cluster.x-k8s.io when running the oc delete machine command: $ oc delete machine.cluster.x-k8s.io <machine_name>
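The same fully qualified names also disambiguate read operations. As a minimal illustration, assuming the default namespaces described above (Machine API machines in openshift-machine-api, Cluster API machines in openshift-cluster-api), you can list each type of machine explicitly:
$ oc get machines.machine.openshift.io -n openshift-machine-api
$ oc get machines.cluster.x-k8s.io -n openshift-cluster-api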
|
[
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api",
"oc create -f <cluster_resource_file>.yaml",
"oc get cluster",
"NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3",
"oc create -f <machine_template_resource_file>.yaml",
"oc get <machine_template_kind>",
"NAME AGE <template_name> 77m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3",
"oc create -f <machine_set_resource_file>.yaml",
"oc get machineset -n openshift-cluster-api 1",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m",
"oc get machine -n openshift-cluster-api 1",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s",
"oc get node",
"NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5",
"oc get <machine_template_kind> 1",
"NAME AGE <template_name> 77m",
"oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml",
"oc apply -f <modified_template_name>.yaml 1",
"oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m",
"oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h",
"oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s",
"oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 2 host: <control_plane_endpoint_address> port: 6443 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 3 name: <cluster_name> namespace: openshift-cluster-api",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large ignition: storageType: UnencryptedUserData version: \"3.2\" ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 disableExtensionOperations: true identity: UserAssigned image: id: /subscriptions/<subscription_id>/resourceGroups/<cluster_name>-rg/providers/Microsoft.Compute/galleries/gallery_<compliant_cluster_name>/images/<cluster_name>-gen2/versions/latest 4 networkInterfaces: - acceleratedNetworking: true privateIPConfigs: 1 subnetName: <cluster_name>-worker-subnet osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux sshPublicKey: <ssh_key_value> userAssignedIdentities: - providerID: 'azure:///subscriptions/<subscription_id>/resourcegroups/<cluster_name>-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<cluster_name>-identity' vmSize: Standard_D4s_v3",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 3 name: <template_name> 4",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 template: <vm_template_name> 4 server: <vcenter_server_ip> 5 diskGiB: 128 cloneMode: linkedClone 6 datacenter: <vcenter_data_center_name> 7 datastore: <vcenter_datastore_name> 8 folder: <vcenter_vm_folder_path> 9 resourcePool: <vsphere_resource_pool> 10 numCPUs: 4 memoryMiB: 16384 network: devices: - dhcp4: true networkName: \"<vm_network_name>\" 11",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 4 name: <template_name> 5 failureDomain: 6 - name: <failure_domain_name> region: <region_a> zone: <zone_a> server: <vcenter_server_name> topology: datacenter: <region_a_data_center> computeCluster: \"</region_a_data_center/host/zone_a_cluster>\" resourcePool: \"</region_a_data_center/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_data_center/datastore/datastore_a>\" networks: - port-group",
"oc delete machine.machine.openshift.io <machine_name>",
"oc delete machine.cluster.x-k8s.io <machine_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/managing-machines-with-the-cluster-api
|
8.104. kernel
|
8.104. kernel 8.104.1. RHSA-2015:1081 - Important: kernel security, bug fix, and enhancement update Updated kernel packages that fix several security issues and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2015-1805 , Important It was found that the Linux kernel's implementation of vectored pipe read and write functionality did not take into account the I/O vectors that were already processed when retrying after a failed atomic access operation, potentially resulting in memory corruption due to an I/O vector array overrun. A local, unprivileged user could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2015-3331 , Important A buffer overflow flaw was found in the way the Linux kernel's Intel AES-NI instructions optimized version of the RFC4106 GCM mode decryption functionality handled fragmented packets. A remote attacker could use this flaw to crash, or potentially escalate their privileges on, a system over a connection with an active AES-GCM mode IPSec security association. CVE-2014-9419 , Low An information leak flaw was found in the way the Linux kernel changed certain segment registers and thread-local storage (TLS) during a context switch. A local, unprivileged user could use this flaw to leak the user space TLS base address of an arbitrary process. CVE-2014-9420 , Low It was found that the Linux kernel's ISO file system implementation did not correctly limit the traversal of Rock Ridge extension Continuation Entries (CE). An attacker with physical access to the system could use this flaw to trigger an infinite loop in the kernel, resulting in a denial of service. CVE-2014-9585 , Low An information leak flaw was found in the way the Linux kernel's Virtual Dynamic Shared Object (vDSO) implementation performed address randomization. A local, unprivileged user could use this flaw to leak kernel memory addresses to user-space. Red Hat would like to thank Carl Henrik Lunde for reporting CVE-2014-9420. The security impact of the CVE-2015-1805 issue was discovered by Red Hat. Bug Fixes BZ# 1201674 When repeating a Coordinated Universal Time (UTC) value during a leap second (when the UTC time should be 23:59:60), the International Atomic Time (TAI) timescale previously stopped as the kernel NTP code incremented the TAI offset one second later than expected. A patch has been provided, which fixes the bug by incrementing the offset during the leap second itself. Now, the correct TAI is set during the leap second. BZ# 1204626 Due to a race condition, deleting a cgroup while pages belonging to that group were being swapped in could trigger a kernel crash. This update fixes the race condition, and deleting a cgroup is now safe even under heavy swapping. BZ# 1207815 Previously, the open() system call in some cases failed with an EBUSY error if the opened file was also being renamed at the same time. With this update, the kernel automatically retries open() when this failure occurs, and if the retry is not successful either, open() now fails with an ESTALE error. 
BZ# 1208620 Prior to this update, cgroup blocked new threads from joining the target thread group during cgroup migration, which led to a race condition against exec() and exit() functions, and a consequent kernel panic. This bug has been fixed by extending thread group locking such that it covers all operations which can alter the thread group - fork(), exit(), and exec(), and cgroup migration no longer causes the kernel to panic. BZ# 1211940 The hrtimer_start() function previously attempted to reinsert a timer which was already defined. As a consequence, the timer node pointed to itself and the rb_insert_color() function entered an infinite loop. This update prevents the hrtimer_enqueue_reprogram() function from racing and makes sure the timer state in remove_hrtimer() is preserved, thus fixing the bug. BZ# 1144442 Previously, the bridge device did not propagate VLAN information to its ports and Generic Receive Offload (GRO) information to the connected devices. This resulted in lower receive performance of VLANs over bridge devices because GRO was not enabled. An attempt to resolve this problem was made with BZ#858198 by introducing a patch that allows VLANs to be registered with the participating bridge ports and adds GRO to the bridge device feature set. However, this attempt introduced a number of regressions, which broke the vast majority of stacked setups involving bridge devices and VLANs. This update reverts the patch provided by BZ#858198 and removes support for this capability. BZ# 1199900 Previously, the kernel initialized FPU state for the signal handler too early, right after the current state was saved for the sigreturn() function. As a consequence, a task could lose its floating-point unit (FPU) context if the signal delivery failed. The fix ensures that the drop_init_fpu() function is only called when the signal is delivered successfully, and FPU context is no longer lost in the described situation. BZ# 1203366 On mounting a Common Internet File System (CIFS) share using Kerberos authentication, the CIFS module uses the request_key mechanism to obtain the user's krb5 credentials. Once the key has been used and is no longer needed, CIFS revoked it. This caused "key revoked" errors to be returned when attempting to refetch the key. To fix this bug, the key_invalidate() call has been backported from the upstream code to discard the key. This call renders the discarded key invisible to further searches and wakes up the garbage collector immediately to remove the key from the keyrings and to destroy it. As a result, discarded keys are immediately cleared and are no longer returned on key searches. BZ# 1203544 Previously, the fc_remote_port_del() call preceded the calls that re-establish the session with the Fibre Channel (FC) transport, the fc_remote_port_add() and fc_remote_port_rolechg() functions. With this update, the fc_remote_port_del() call has been removed before re-establishing the connection, which prevents the race condition from occurring. BZ# 1210593 Due to a race condition in the build_id_cache__add_s() function, system files could be truncated. This update fixes the race condition, and system files are no longer truncated in the aforementioned situation. BZ# 1212057 Prior to this update, the "--queue-balance" option did not distribute traffic over multiple queues as the option ignored a request to balance among the given range and only used the first queue number given. As a consequence, the kernel traffic was limited to one queue.
The underlying source code has been patched, and the kernel traffic is now balanced within the given range. Enhancements BZ# 1173501 , BZ# 1173562 This update introduces a set of patches with a new VLAN model to conform to upstream standards. In addition, this set of patches fixes other issues such as transmission of Internet Control Message Protocol (ICMP) fragments. Users of kernel are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.104.2. RHSA-2015:0864 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-3215 , Important A flaw was found in the way seunshare, a utility for running executables under a different security context, used the capng_lock functionality of the libcap-ng library. The subsequent invocation of suid root binaries that relied on the fact that the setuid() system call, among others, also sets the saved set-user-ID when dropping the binaries' process privileges, could allow a local, unprivileged user to potentially escalate their privileges on the system. Note: the fix for this issue is the kernel part of the overall fix, and introduces the PR_SET_NO_NEW_PRIVS functionality and the related SELinux exec transitions support. CVE-2015-1421 , Important A use-after-free flaw was found in the way the Linux kernel's SCTP implementation handled authentication key reference counting during INIT collisions. A remote attacker could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2014-3690 , Moderate It was found that the Linux kernel's KVM implementation did not ensure that the host CR4 control register value remained unchanged across VM entries on the same virtual CPU. A local, unprivileged user could use this flaw to cause a denial of service on the system. CVE-2014-7825 , Moderate An out-of-bounds memory access flaw was found in the syscall tracing functionality of the Linux kernel's perf subsystem. A local, unprivileged user could use this flaw to crash the system. CVE-2014-7826 , Moderate An out-of-bounds memory access flaw was found in the syscall tracing functionality of the Linux kernel's ftrace subsystem. On a system with ftrace syscall tracing enabled, a local, unprivileged user could use this flaw to crash the system, or escalate their privileges. CVE-2014-8171 , Moderate It was found that the Linux kernel memory resource controller's (memcg) handling of OOM (out of memory) conditions could lead to deadlocks. An attacker able to continuously spawn new processes within a single memory-constrained cgroup during an OOM event could use this flaw to lock up the system. CVE-2014-9529 , Moderate A race condition flaw was found in the way the Linux kernel keys management subsystem performed key garbage collection. A local attacker could attempt accessing a key while it was being garbage collected, which would cause the system to crash. CVE-2014-8884 , Low A stack-based buffer overflow flaw was found in the TechnoTrend/Hauppauge DEC USB device driver.
A local user with write access to the corresponding device could use this flaw to crash the kernel or, potentially, elevate their privileges on the system. CVE-2014-9584 , Low An information leak flaw was found in the way the Linux kernel's ISO9660 file system implementation accessed data on an ISO9660 image with RockRidge Extension Reference (ER) records. An attacker with physical access to the system could use this flaw to disclose up to 255 bytes of kernel memory. Red Hat would like to thank Andy Lutomirski for reporting CVE-2014-3215 and CVE-2014-3690, Robert Swiecki for reporting CVE-2014-7825 and CVE-2014-7826, and Carl Henrik Lunde for reporting CVE-2014-9584. The CVE-2015-1421 issue was discovered by Sun Baoliang of Red Hat. Bug Fixes BZ# 1195747 Due to a regression, when large reads which partially extended beyond the end of the underlying device were done, the raw driver returned the EIO error code instead of returning a short read covering the valid part of the device. The underlying source code has been patched, and the raw driver now returns a short read for the remainder of the device. BZ# 1187639 Previously, a NULL pointer check that is needed to prevent an oops in the nfs_async_inode_return_delegation() function was removed. As a consequence, a NFS4 client could terminate unexpectedly. The missing NULL pointer check has been added back, and NFS4 client no longer crashes in this situation. BZ# 1187666 A failure to leave a multicast group which had previously been joined prevented the attempt to unregister from the "sa" service. Multiple locking issues in the IPoIB multicast join and leave processing have been fixed so that leaving a group that has completed its join process is successful. As a result, attempts to unregister from the "sa" service no longer lock up due to leaked resources. BZ# 1187664 Due to unbalanced multicast join and leave processing, the attempt to leave a multicast group that had not previously completed a join became unresponsive. This update resolves multiple locking issues in the IPoIB multicast code that allowed multicast groups to be left before the joining was entirely completed. Now, multicast join and leave failures or lockups no longer occur in the described situation. BZ# 1188339 The kernel source code contained two definitions of the cpu_logical_map() function, which maps logical CPU numbers to physical CPU addresses. When translating the logical CPU number to the corresponding physical CPU number, the kernel used the second definition of cpu_logical_map(), which always used a one-to-one mapping of logical to physical CPU addresses. This mapping was, however, wrong after a reboot, especially if the target CPU was in the "stopped" state. Consequently, the system became unresponsive or showed unexpected latencies. With this update, the second definition of cpu_logical_map() has been removed. As a result, the kernel now correctly translates the CPU number to its physical address, and no unexpected latencies occur in this scenario. BZ# 1188838 Previously, the kernel could under certain circumstances provide the tcp_collapse() function with a socket buffer (SKB) whose "headroom" was equal to the value of the PAGE_SIZE variable. Consequently, the copy value was zero in the loop, which could never exit because it was not making forward progress. To fix this problem, the loop has been rewritten to avoid the incorrect calculation. Instead, the loop copies either the value of the PAGE_SIZE variable or the size of the buffer, whichever is bigger. 
As a result, the tcp_collapse() function is no longer apt to get stuck in the loop, because the copy is always non-zero as long as the "end" differs from the "start". BZ# 1188941 Prior to this update, when using the fibre channel driver, a race condition occurred in the rport scsi_remove_target() function. As a consequence, the kernel terminated unexpectedly when dereferencing an invalid address. To fix this bug, the changes to the reference counting infrastructure have been reverted, and the system no longer crashes. BZ# 1191916 On older systems without the QCI instruction, all possible domains are probed via the TAPQ instruction. Prior to this update, a specification exception could occur when this instruction was called for probing values greater than 16; for example, during the execution of the "insmod" command or the reset of the AP bus on machines without the QCI instruction (z10, z196, z114). zEC12 and newer systems were not affected. Consequently, loading the z90crypt kernel module caused a panic. Now, the domain checking function has been corrected to limit the allowed range if no QCI information is available. As a result, users are able to successfully load and perform cryptographic functions with the z90crypt device driver. BZ# 1192055 Previously, KVM took a page fault with interrupts disabled. Consequently, the page fault handler tried to take a lock, but KSM sent an IPI while taking the same lock. Then KSM waited for the IPI to be processed, but KVM would not process it until it took the lock. KSM and KVM would eventually encounter a deadlock, each waiting for the other. With this update, the kernel avoids operations that can page fault while interrupts are disabled. As a result, KVM and KSM are no longer prone to a deadlock in the aforementioned scenario. BZ# 1192105 The USB core uses the "hcpriv" member of the USB request block to determine whether a USB Request Block (URB) is active, but the ehci-hcd driver was not setting this correctly when it queued isochronous URBs. This, in combination with a defect in the snd-usb-audio driver, could cause URBs to be reused without waiting for them to complete. Consequently, list corruption followed by system freeze or a kernel crash occurred. To fix this problem, the ehci-hcd driver code has been updated to properly set the "hcpriv" variable for isochronous URBs, and the snd-usb-audio driver has been updated to synchronize pending stop operations on an endpoint before continuing with preparing the PCM stream. As a result, list corruption followed by system freeze or a crash no longer occurs. BZ# 1193639 Previously, the Hewlett Packard Smart Array (HPSA) driver in conjunction with an older version of the HPSA firmware and the hp-snmp-agent monitoring software used the "system work queue" shared resource for an extensively long time. Consequently, random other tasks were blocked until the HPSA driver released the work queue, and messages reporting the blocked tasks were logged. With this update, the HPSA driver creates its own local work queue, which fixes this problem. BZ# 1198329 Prior to this update, the GFS2 file system's "Splice Read" operation, which is used for functions such as sendfile(), was not properly allocating a required multi-block reservation structure in memory. As a consequence, when the GFS2 block allocator was called to assign blocks of data, it tried to dereference the structure, which resulted in a kernel panic.
Now, GFS2's "Splice read" operation has been changed so that it properly allocates the necessary reservation structure in memory prior to calling the block allocator. As a result, sendfile() now works properly for GFS2. Users of kernel are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.104.3. RHSA-2015:0087 - Important: kernel security and bug fix update Updated kernel packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-7841 , Important A flaw was found in the way the Linux kernel's SCTP implementation validated INIT chunks when performing Address Configuration Change (ASCONF). A remote attacker could use this flaw to crash the system by sending a specially crafted SCTP packet to trigger a NULL pointer dereference on the system. CVE-2014-4656 , Moderate An integer overflow flaw was found in the way the Linux kernel's Advanced Linux Sound Architecture (ALSA) implementation handled user controls. A local, privileged user could use this flaw to crash the system. The CVE-2014-7841 issue was discovered by Liu Wei of Red Hat. Bug Fixes BZ# 1161420 LVM2 thin provisioning is sensitive to whether I/O within a full RAID stripe is issued to the controller's write-back cache in close proximity. This update improves LVM2 thin provisioning to work more efficiently on RAID devices. BZ# 1161421 Previously, under a heavy I/O load, a timeout of an unresponsive task could occur while using LVM2 thin provisioning. With this update, various infrastructure used by LVM2 thin provisioning has been improved to be more efficient and correct. This includes the use of more efficient data structures, throttling worker threads to prevent an application from sending more I/O than can be handled, and pre-fetching metadata. This update also fixes the eviction logic used by the metadata I/O buffering layer, which ensures that metadata blocks are not evicted prematurely. BZ# 1162072 Due to a malfunction in the USB controller driver, some data stream metadata was dropped. As a consequence, the integrated camera failed to record and an error message was logged. This update fixes the data stream metadata handling, and the integrated camera works as expected. BZ# 1165986 Before this update, a race condition occurred in the spin-lock logic on the PowerPC platform. Consequently, under workloads making heavy use of Inter-Process Communication (IPC), the kernel could terminate unexpectedly. This update adds proper synchronization to the PowerPC spin-lock framework. As a result, the kernel no longer crashes under heavy use of IPC. BZ# 1163214 Due to an overlooked piece of code that initializes the pre-operation change attribute, certain workloads could generate unnecessary cache invalidations and additional NFS read operations. This update fixes the initialization of the pre_change_attr field, which prevents unnecessary invalidation of cached data. BZ# 1165001 Previously, under certain error conditions gfs2_converter introduced incorrect values for the on-disk inode's di_goal_meta field.
As a consequence, gfs2_converter returned the EBADSLT error on such inodes and did not allow creation of new files in directories or new blocks in regular files. The fix allows gfs2_converter to set a sensible goal value if a corrupt one is encountered and proceed with normal operations. With this update, gfs2_converter implicitly fixes any corrupt goal values, and thus no longer disrupts normal operations. BZ# 1165002 Previously, under certain error conditions, using the semaphore utility caused the kernel to become unresponsive. With this update, a patch has been applied to fix this bug. As a result, kernel hangs no longer occur while using semaphores. BZ# 1165985 Before this update, due to a coding error in the e100 Ethernet driver update, physical layers (PHYs) did not initialize correctly. This could cause RX errors and decreased throughput, especially when using long UTP cabling. This update fixes the coding error, and as a result, the aforementioned scenario no longer occurs on the e100 Ethernet device. BZ# 1168129 Before this update, a flaw in the duplicate reply cache in the NFS daemon allowed entries to be freed while they were still used. Consequently, a heavily loaded NFS daemon could terminate unexpectedly if an RPC call took a long time to process. The cache has been fixed to protect such entries from freeing, and the server now functions normally in the aforementioned scenario. BZ# 1168504 Previously, external journal blocks were handled incorrectly, causing an increase of processor usage. Consequently, on ext4 file systems configured with an external journal device, the df command could show negative values due to the subtraction of the journal block count from the used block count. With this update, the external journal blocks are handled properly, and therefore df no longer returns negative values. BZ# 1169433 Before this update, raising the SIGBUS signal did not include a siginfo structure describing the cause of the SIGBUS exception. As a consequence, applications that use huge pages through the libhugetlbfs library failed. The kernel has been updated to raise SIGBUS with a siginfo structure, to deliver BUS_ADRERR as si_code, and to deliver the address of the fault in the si_addr field. As a result, applications that use huge pages no longer fail in the aforementioned scenario. BZ# 1172022 Previously, accessing a FUSE-based file system from kernel space could cause the kernel to become unresponsive during an inode look-up operation. To fix this bug, existing flags are verified before dereference occurs in the FUSE look-up handler. As a result, accessing a FUSE-based file system from kernel space works as expected. BZ# 1172024 Previously, hot plugging of a USB EHCI controller could cause the kernel to become unresponsive. This update fixes the handling of a race condition in the EHCI error path during the hot-plugging event, and the kernel no longer hangs. BZ# 1172025 Previously, the system functions semop() and semtimedop() did not update the time of the semaphore update located in the structure sem_otime, which was inconsistent with the function description in the man pages. With this update, a patch has been applied to fix this bug. As a result, semop() and semtimedop() now properly update the sem_otime structure. BZ# 1172027 Before this update, when forwarding a packet, the iptables target TCPOPTSTRIP used the tcp_hdr() function to locate the option space.
Consequently, TCPOPTSTRIP located the incorrect place in the packet, and therefore did not match options for stripping. With this update, TCPOPTSTRIP now uses the TCP header itself to locate the option space. As a result, the options are now properly stripped. BZ# 1172764 Prior to this update, the ipset utility computed incorrect values of timeouts from an old IP set, and these values were then supplied to a new IP set. As a consequence, a resize on an IP set with a timeouts option enabled could supply corrupted data from an old IP set. This bug has been fixed by properly reading timeout values from an old set before supplying them to a new set. BZ# 1172029 Previously, under certain conditions, a race condition between the semaphore creation code and the semop() function caused the kernel to become unresponsive. With this update, a patch has been applied, and the kernel hangs no longer occur. BZ# 1172030 Prior to this update, a NULL pointer dereference could occur when the usb_wwan device driver was performing a disconnect operation. The usb_wwan disconnect procedure has been replaced with the port_remove procedure and, as a result, the kernel no longer hangs when removing a WWAN USB device. BZ# 1175509 The usage of the PCLMULQDQ instruction required the invocation of the kernel_fpu_begin() and kernel_fpu_end() functions. Consequently, the usage of the PCLMULQDQ instruction for the CRC32C checksum calculation incurred some increase of processor usage. With this update, a new function has been added in order to calculate the CRC32C checksum using the PCLMULQDQ instruction on processors that support this feature, which provides a speedup over using the CRC32 instruction only. Users of kernel are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.104.4. RHSA-2014:1392 - Important: kernel security, bug fix, and enhancement update Updated kernel packages that fix multiple security issues, address several hundred bugs, and add numerous enhancements are now available as part of the ongoing support and maintenance of Red Hat Enterprise Linux version 6. This is the sixth regular update. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-5077 , Important A NULL pointer dereference flaw was found in the way the Linux kernel's Stream Control Transmission Protocol (SCTP) implementation handled simultaneous connections between the same hosts. A remote attacker could use this flaw to crash the system. CVE-2013-2596 , Important An integer overflow flaw was found in the way the Linux kernel's Frame Buffer device implementation mapped kernel memory to user space via the mmap syscall. A local user able to access a frame buffer device file (/dev/fb*) could possibly use this flaw to escalate their privileges on the system. CVE-2013-4483 , Moderate A flaw was found in the way the ipc_rcu_putref() function in the Linux kernel's IPC implementation handled reference counter decrementing. A local, unprivileged user could use this flaw to trigger an Out of Memory (OOM) condition and, potentially, crash the system.
CVE-2014-0181 , Moderate It was found that the permission checks performed by the Linux kernel when a netlink message was received were not sufficient. A local, unprivileged user could potentially bypass these restrictions by passing a netlink socket as stdout or stderr to a more privileged process and altering the output of this process. CVE-2014-3122 , Moderate It was found that the try_to_unmap_cluster() function in the Linux kernel's Memory Management subsystem did not properly handle page locking in certain cases, which could potentially trigger the BUG_ON() macro in the mlock_vma_page() function. A local, unprivileged user could use this flaw to crash the system. CVE-2014-3601 , Moderate A flaw was found in the way the Linux kernel's kvm_iommu_map_pages() function handled IOMMU mapping failures. A privileged user in a guest with an assigned host device could use this flaw to crash the host. CVE-2014-4653 , CVE-2014-4654 , CVE-2014-4655 , Moderate Multiple use-after-free flaws were found in the way the Linux kernel's Advanced Linux Sound Architecture (ALSA) implementation handled user controls. A local, privileged user could use any of these flaws to crash the system. CVE-2014-5045 , Moderate A flaw was found in the way the Linux kernel's VFS subsystem handled reference counting when performing unmount operations on symbolic links. A local, unprivileged user could use this flaw to exhaust all available memory on the system or, potentially, trigger a use-after-free error, resulting in a system crash or privilege escalation. CVE-2014-4608 , Low An integer overflow flaw was found in the way the lzo1x_decompress_safe() function of the Linux kernel's LZO implementation processed Literal Runs. A local attacker could, in extremely rare cases, use this flaw to crash the system or, potentially, escalate their privileges on the system. Red Hat would like to thank Vladimir Davydov of Parallels for reporting CVE-2013-4483, Jack Morgenstein of Mellanox for reporting CVE-2014-3601, Vasily Averin of Parallels for reporting CVE-2014-5045, and Don A. Bailey from Lab Mouse Security for reporting CVE-2014-4608. The security impact of the CVE-2014-3601 issue was discovered by Michael Tsirkin of Red Hat. Bug Fixes BZ# 1065187 A bug in the megaraid_sas driver could cause the driver to read the hardware status values incorrectly. As a consequence, the RAID card was disabled during the system boot and the system could fail to boot. With this update, the megaraid_sas driver has been corrected so that the RAID card is now enabled on system boot as expected. BZ# 1063699 Due to an ndlp list corruption bug in the lpfc driver, systems with Emulex LPe16002B-M6 PCIe 2-port 16Gb Fibre Channel Adapters could trigger a kernel panic during I/O operations. A series of patches has been backported to address this problem so the kernel no longer panics during I/O operations on the aforementioned systems. BZ# 704190 Previously, when using a bridge interface configured on top of a bonding interface, the bonding driver was not aware of IP addresses assigned to the bridge. Consequently, with ARP monitoring enabled, the ARP monitor could not target the IP address of the bridge when probing the same subnet. The bridge was thus always reported as being down and could not be reached. With this update, the bonding driver has been made aware of IP addresses assigned to a bridge configured on top of a bonding interface, and the ARP monitor can now probe the bridge as expected.
Note that the problem still occurs if the arp_validate option is used. Therefore, do not use this option in this case until this issue is fully resolved. BZ# 1063478 , BZ# 1065398 , BZ# 1065404 , BZ# 1043540 , BZ# 1096328 Several concurrency problems, that could result in data corruption, were found in the implementation of CTR and CBC modes of operation for AES, DES, and DES3 algorithms on IBM S/390 systems. Specifically, a working page was not protected against concurrency invocation in CTR mode. The fallback solution for not getting a working page in CTR mode did not handle iv values correctly. The CBC mode used did not properly save and restore the key and iv values in some concurrency situations. All these problems have been addressed in the code and the concurrent use of the aforementioned algorithms no longer cause data corruption. BZ# 1061873 A change in the Advanced Programmable Interrupt Controller (APIC) code caused a regression on certain Intel CPUs using a Multiprocessor (MP) table. An attempt to read from the local APIC (LAPIC) could be performed before the LAPIC was mapped, resulting in a kernel crash during a system boot. A patch has been applied to fix this problem by mapping the LAPIC as soon as possible when parsing the MP table. BZ# 1060886 A miscalculation in the "radix_tree" swap encoding corrupted swap area indexes bigger than 8 by truncating lower bits of swap entries. Consequently, systems with more than 8 swap areas could trigger a bogus OOM scenario when swapping out to such a swap area. This update fixes this problem by reducing a return value of the SWP_TYPE_SHIFT() function and removing a broken function call from the read_swap_header() function. BZ# 1060381 Previously some device mapper kernel modules, such as dm-thin, dm-space-map-metadata, and dm-bufio, contained various bugs that had adverse effects on their proper functioning. This update backports several upstream patches that resolve these problems, including a fix for the metadata resizing feature of device mapper thin provisioning (thinp) and fixes for read-only mode for dm-thin and dm-bufio. As a result, the aforementioned kernel modules now contain the latest upstream changes and work as expected. BZ# 1059808 When an attempt to create a file on the GFS2 file system failed due to a file system quota violation, the relevant VFS inode was not completely uninitialized. This could result in a list corruption error. This update resolves this problem by correctly uninitializing the VFS inode in this situation. BZ# 1059777 In Red Hat Enterprise Linux 6.5, the TCP Segmentation Offload (TSO) feature is automatically disabled if the corresponding network device does not report any CSUM flag in the list of its features. Previously, VLAN devices that were configured over bonding devices did not propagate its NETIF_F_NO_CSUM flag as expected, and their feature lists thus did not contain any CSUM flags. As a consequence, the TSO feature was disabled for these VLAN devices, which led to poor bandwidth performance. With this update, the bonding driver propagates the aforementioned flag correctly so that network traffic now flows through VLAN devices over bonding without any performance problems. BZ# 1059586 Due to a bug in the mlx4_en module, a data structure related to time stamping could be accessed before being initialized. As a consequence, loading mlx4_en could result in a kernel crash. This problem has been fixed by moving the initiation of the time stamp mechanism to the correct place in the code. 
BZ# 1059402 Due to a change that was refactoring the Generic Routing Encapsulation (GRE) tunneling code, the ip_gre module did not work properly. As a consequence, GRE interfaces dropped every packet that had the Explicit Congestion Notification (ECN) bit set and did not have the ECN-Capable Transport (ECT) bit set. This update reintroduces the ipgre_ecn_decapsulate() function that is now used instead of the IP_ECN_decapsulate() function that was not properly implemented. The ip_gre module now works correctly and GRE devices process all packets as expected. BZ# 1059334 When removing an inode from a name space on an XFS file system, the file system could enter a deadlock situation and become unresponsive. This happened because the removal operation incorrectly used the AGF and AGI locks in the opposite order than was required by the ordering constraint, which led to a possible deadlock between the file removal and inode allocation and freeing operations. With this update, the inode's reference count is dropped before removing the inode entry with the first transaction of the removal operation. This ensures that the AGI and AGF locks are locked in the correct order, preventing any further deadlocks in this scenario. BZ# 1059325 Previously, the for_each_isci_host() macro was incorrectly defined so it accessed an out-of-range element for a 2-element array. This macro was also wrongly optimized by GCC 4.8 so that it was executed too many times on platforms with two SCU controllers. As a consequence, the system triggered a kernel panic when entering the S3 state, or a kernel oops when removing the isci module. This update corrects the aforementioned macro and the described problems no longer occur. BZ# 1067722 A change enabled receive acceleration for VLAN interfaces configured on a bridge interface. However, this change allowed VLAN-tagged packets to bypass the bridge and be delivered directly to the VLAN interfaces. This update ensures that the traffic is correctly processed by a bridge before it is passed to any VLAN interfaces configured on that bridge. BZ# 844450 The Completely Fair Scheduler (CFS) did not verify whether the CFS period timer is running while throttling tasks on the CFS run queue. Therefore under certain circumstances, the CFS run queue became stuck because the CFS period timer was inactive and could not be restarted. To fix this problem, the CFS now restarts the CFS period timer inside the throttling function if it is inactive. BZ# 1069028 Due to a bug in the ixgbevf driver, the stripped VLAN information from incoming packets on the ixgbevf interface could be lost, and such packets thus did not reach a related VLAN interface. This problem has been fixed by adding the packet's VLAN information to the Socket Buffer (skb) before passing it to the network stack. As a result, the ixgbevf driver now passes the VLAN-tagged packets to the appropriate VLAN interface. BZ# 1069737 patches to the CIFS code introduced a regression that prevented users from mounting a CIFS share using the NetBIOS over TCP service on the port 139. This problem has been fixed by masking off the top byte in the get_rfc1002_length() function. BZ# 880024 Previously, the locking of a semtimedop semaphore operation was not fine enough with remote non-uniform memory architecture (NUMA) node accesses. As a consequence, spinlock contention occurred, which caused delays in the semop() system call and high load on the server when running numerous parallel processes accessing the same semaphore. 
This update improves scalability and performance of workloads with a lot of semaphore operations, especially on larger NUMA systems. This improvement has been achieved by turning the global lock for each semaphore array into a per-semaphore lock for many semaphore operations, which allows multiple simultaneous semop() operations. As a result, performance degradation no longer occurs. BZ# 886723 A rare race between the file system unmount code and the file system notification code could lead to a kernel panic. With this update, a series of patches has been applied to the kernel to prevent this problem. BZ# 885517 A bug in the bio layer could prevent user space programs from writing data to disk when the system run under heavy RAM memory fragmentation conditions. This problem has been fixed by modifying a respective function in the bio layer to refuse to add a new memory page only if the page would start a new memory segment and the maximum number of memory segments has already been reached. BZ# 1070856 A bug in the qla2xxx driver caused the kernel to crash. This update resolves this problem by fixing an incorrect condition in the "for" statement in the qla2x00_alloc_iocbs() function. BZ# 1072373 A change that introduced global clock updates caused guest machines to boot slowly when the host Time Stamp Counter (TSC) was marked as unstable. The slow down increased with the number of vCPUs allocated. To resolve this problem, a patch has been applied to limit the rate of the global clock updates. BZ# 1055644 A previously backported patch to the XFS code added an unconditional call to the xlog_cil_empty() function. If the XFS file system was mounted with the unsupported nodelaylog option, that call resulted in access to an uninitialized spin lock and a consequent kernel panic. To avoid this problem, the nodelaylog option has been disabled; the option is still accepted but has no longer any effect. (The nodelaylog mount option was originally intended only as a testing option upstream, and has since been removed.) BZ# 1073129 Due to a bug in the hrtimers subsystem, the clock_was_set() function called an inter-processor interrupt (IPI) from soft IRQ context and waited for its completion, which could result in a deadlock situation. A patch has been applied to fix this problem by moving the clock_was_set() function call to the working context. Also during the resume process, the hrtimers_resume() function reprogrammed kernel timers only for the current CPU because it assumed that all other CPUs are offline. However, this assumption was incorrect in certain scenarios, such as when resuming a Xen guest with some non-boot CPUs being only stopped with IRQs disabled. As a consequence, kernel timers were not corrected on other than the boot CPU even though those CPUs were online. To resolve this problem, hrtimers_resume() has been modified to trigger an early soft IRQ to correctly reprogram kernel timers on all CPUs that are online. BZ# 1073218 A bug in the vmxnet3 driver allowed potential race conditions to be triggered when the driver was used with the netconsole module. The race conditions allowed the driver's internal NAPI poll routine to run concurrently with the netpoll controller routine, which resulted in data corruption and a subsequent kernel panic. To fix this problem, the vmxnet3 driver has been modified to call the appropriate interrupt handler to schedule NAPI poll requests properly. BZ# 1075713 The Red Hat GFS2 file system previously limited a number of ACL entries per inode to 25. 
However, this number was insufficient in some cases, causing the setfacl command to fail. This update increases this limit to a maximum of 300 ACL entries for the 4 KB block size. If the block size is smaller, this value is adjusted accordingly. BZ# 1053547 The SCTP sctp_connectx() ABI did not work properly for 64-bit kernels compiled with 32-bit emulation. As a consequence, applications utilizing the sctp_connectx() function did not run in this case. To fix this problem, a new ABI has been implemented; the COMPAT ABI makes it possible to copy and transform user data from a COMPAT-specific structure to an SCTP-specific structure. Applications that require sctp_connectx() now work without any problems on a system with a 64-bit kernel compiled with 32-bit emulation. BZ# 1075805 Previously, if an hrtimer interrupt was delayed, all future pending hrtimer events that were queued on the same processor were also delayed until the initial hrtimer event was handled. This could cause all hrtimer processing to stop for a significant period of time. To prevent this problem, the kernel has been modified to handle all expired hrtimer events when handling the initially delayed hrtimer event. BZ# 915862 A change in the NFSv4 code resulted in breaking the sync NFSv4 mount option. A patch has been applied that restores functionality of the sync mount option (illustrated below). BZ# 1045150 The code responsible for creating and binding packet sockets was not optimized and therefore applications that utilized the socket() and bind() system calls did not perform as expected. A patch has been applied to the packet socket code so that latency for socket creation and binding is now significantly lower in certain cases. BZ# 919756 A race condition between completion and timeout handling in the block device code could sometimes trigger a BUG_ON() assertion, resulting in a kernel panic. This update resolves this problem by relocating a relevant function call and the BUG_ON() assertion in the code. BZ# 1044117 The context of the user's process could not previously be saved on PowerPC platforms if the VSX Machine State Register (MSR) bit was set but the user did not provide enough space to save the VSX state. This update allows the VSX MSR bit to be cleared in such a situation, indicating that there is no valid VSX state in the user context. BZ# 1043733 The kernel task scheduler could trigger a race condition while migrating tasks over CPU cgroups. The race could result in accessing a task that pointed to an incorrect parent task group, causing the system to behave unpredictably, for example, to appear unresponsive. This problem has been resolved by ensuring that the correct task group information is properly stored during the task's migration. BZ# 1043353 Previously, when hot adding memory to the system, the memory management subsystem always performed unconditional page-block scans for all memory sections being set online. The total duration of the hot add operation depends on both the size of memory that the system already has and the size of memory that is being added. Therefore, the hot add operation took an excessive amount of time to complete if a large amount of memory was added or if the target node already had a considerable amount of memory. This update optimizes the code so that page-block scans are performed only when necessary, which greatly reduces the duration of the hot add operation.
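As a hedged illustration of the restored sync mount option (BZ# 915862 above), an NFSv4 mount that requests synchronous writes can be expressed as follows; the server name, export path, and mount point are placeholders:
# mount the export with synchronous I/O so each write is committed to the server before returning
mount -t nfs4 -o sync server.example.com:/export /mnt/export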
BZ# 1043051 Due to a bug in the SELinux socket receive hook, network traffic was not dropped upon receiving a peer:recv access control denial on some configurations. A broken labeled networking check in the SELinux socket receive hook has been corrected, and network traffic is now properly dropped in the described case. BZ# 1042731 Recent changes in the d_splice_alias() function introduced a bug that allowed d_splice_alias() to return a dentry from a different directory than the one being looked up. As a consequence, in a cluster environment, a kernel panic could be triggered when a directory was being removed while a concurrent cross-directory operation was performed on this directory on another cluster node. This update avoids the kernel panic in this situation by correcting the search logic in the d_splice_alias() function so that the function can no longer return a dentry from an incorrect directory. BZ# 1040385 When utilizing SCTP over a bonding device in Red Hat Enterprise Linux 6.5 and later, SCTP assumed offload capabilities on virtual devices where it was not guaranteed that the underlying physical devices are equipped with these capabilities. As a consequence, checksums of the outgoing packets became corrupted and a network connection could not be properly established. A patch has been applied to ensure that checksums of the packets sent to devices without SCTP checksum capabilities are properly calculated in the software fallback. SCTP connections over bonding devices can now be established as expected in Red Hat Enterprise Linux 6.5 and later. BZ# 1039723 A change in the TCP code that extended the "proto" struct with a new function, release_cb(), broke the integrity of the kernel Application Binary Interface (kABI). If the core stack called a newly introduced pointer to this function for a module that was compiled against older kernel headers, the call resulted in out-of-bounds access and a subsequent kernel panic. To avoid this problem, the core stack has been modified to recognize a newly introduced slab flag, RHEL_EXTENDED_PROTO. This allows the core stack to safely access the release_cb pointer only for modules that support it. BZ# 1039534 A change removed the ZONE_RECLAIM_LOCKED flag from the Linux memory management code in order to fix a NUMA node allocation problem in the memory zone reclaim logic. However, the flag removal allowed concurrent page reclaiming within one memory zone, which, under heavy system load, resulted in unwanted spin lock contention and subsequent performance problems (systems became slow or unresponsive). This update resolves this problem by preventing reclaim threads from scanning a memory zone if the zone does not satisfy scanning requirements. Systems under heavy load no longer suffer from CPU overloading but sustain their expected performance. BZ# 1082127 NFSv4 incorrectly handled a situation where an NFS client received an NFS4ERR_ADMIN_REVOKED error after sending a CLOSE operation. As a consequence, the client kept sending the same CLOSE operation indefinitely although it was receiving NFS4ERR_ADMIN_REVOKED errors. A patch has been applied to the NFSv4 code to ensure that the NFS client sends the particular CLOSE operation only once in this situation. BZ# 1037467 Due to recent changes in the Linux memory management, the kernel did not properly handle per-CPU LRU page vectors when hot unplugging CPUs. As a consequence, the page vector of the relevant offline CPU kept memory pages for memory accounting.
This prevented the libvirtd daemon from removing the relevant memory cgroup directory upon system shutdown, rendering libvirtd unresponsive. To resolve this problem, the Linux memory management now properly flushes memory pages of offline CPUs from the relevant page vectors. BZ# 1037465 An incorrectly placed function call in the cgroup code prevented the notify_on_release functionality from working properly. This functionality is used to remove empty cgroup directories; however, due to this bug, some empty cgroup directories remained on the system. This update ensures that the notify_on_release functionality is always correctly triggered by correctly ordering operations in the cgroup_task_migrate() function. BZ# 963785 Previously, NFSv4 allowed an NFSv4 client to resume an expired or lost file lock. This could result in file corruption if the file was modified in the meantime. This problem has been resolved by a series of patches ensuring that an NFSv4 client no longer attempts to recover expired or lost file locks. BZ# 1036972 Systems that use NFS file systems could become unresponsive or trigger a kernel oops due to a use-after-free bug in the duplicate reply cache (DRC) code in the nfsd daemon. This problem has been resolved by modifying nfsd to unhash DRC entries before attempting to use them and to prefer to allocate a new DRC entry from the slab instead of reusing an expired entry from the list. BZ# 1036312 Inefficient usage of Big Kernel Locks (BKLs) in the ptrace() system call could lead to BKL contention on certain systems that widely utilize ptrace(), such as User-mode Linux (UML) systems, resulting in degraded performance on these systems. This update removes the relevant BKLs from the ptrace() system call, thus resolving any related performance issues. BZ# 975248 A bug in the ixgbe driver caused the IPv6 hardware filtering tables not to be rewritten correctly upon interface reset when using a bridge device over the PF interface in an SR-IOV environment. As a result, the IPv6 traffic between VFs was interrupted. An upstream patch has been backported to modify the ixgbe driver so that the update of the Multicast Table Array (MTA) is now unconditional, avoiding possible inconsistencies in the MTA table upon the PF's reset. The IPv6 traffic between VFs proceeds as expected in this scenario. BZ# 1116947 Later Intel CPUs added a new "Condition Changed" bit to the MSR_CORE_PERF_GLOBAL_STATUS register. Previously, the kernel falsely assumed that this bit indicates a performance interrupt, which prevented other NMI handlers from running. To fix this problem, a patch has been applied to the kernel to ignore this bit in the perf code, enabling other NMI handlers to run. BZ# 975908 Due to a bug in the mlx4 driver, Mellanox Ethernet cards were brought down unexpectedly while adjusting their Tx or Rx ring. A patch has been applied so that the mlx4 driver now properly verifies the state of the Ethernet card when the coalescing of the Tx or Rx ring is being set, which resolves this problem. BZ# 1083748 Previously, hardware could execute commands sent by drivers in FIFO order instead of tagged order. Commands thus could be executed out of sequence, which could result in large latencies and degradation of throughput. With this update, the ATA subsystem tags each command sent to the hardware, ensuring that the hardware executes commands in tagged order. Performance on controllers supporting tagged commands can now increase by 30-50%.
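For the mlx4 ring and coalescing handling above (BZ# 975908), the following ethtool commands show how such settings are typically queried and adjusted; the interface name and values are placeholders, not recommendations:
# inspect and resize the Rx/Tx rings on the interface
ethtool -g eth0
ethtool -G eth0 rx 4096 tx 4096
# adjust interrupt coalescing on the same interface
ethtool -C eth0 rx-usecs 16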
BZ# 980188 When transferring a large amount of data over a Point-to-Point Protocol (PPP) link, a rare race condition between the throttle() and unthrottle() functions in the tty driver could be triggered. As a consequence, the tty driver became unresponsive, remaining in the throttled state, which resulted in the traffic being stalled. Also, if the PPP link was heavily loaded, another race condition in the tty driver could have been triggered. This race allowed an unsafe update of the available buffer space, which could also result in stalled traffic. A series of patches addressing both race conditions has been applied to the tty driver; if the first race is triggered, the driver loops and forces re-evaluation of the respective test condition, which ensures uninterrupted traffic flow in the described situation. The second race is now completely avoided due to a well-placed read lock, and the update of the available buffer space proceeds correctly. BZ# 1086450 Previously, the Huge Translation Lookaside Buffer (HugeTLB) unconditionally allowed access to huge pages. However, huge pages may be unsupported in some environments, such as a KVM guest on the PowerPC architecture when not backed by huge pages, and an attempt to use a base page as a huge page in memory would result in a kernel oops. This update ensures that HugeTLB denies access to huge pages if the huge pages are not supported on the system. BZ# 982770 The restart logic for memory reclaiming with compaction was previously applied on the level of LRU page vectors. This could, however, cause significant latency in memory allocation because memory compaction does not require only memory pages of a certain cgroup but a whole memory zone. This performance issue has been fixed by moving the restart logic to the zone level and restarting the memory reclaim for all memory cgroups in a zone when the compaction requires more free pages from the zone. BZ# 987634 A bug in the mlx4 driver could trigger a race between the "blue flame" feature's traffic flow and the stamping mechanism in the Tx ring flow when processing Work Queue Elements (WQEs) in the Tx ring. Consequently, the related queue pair (QP) of the mlx4 Ethernet card entered an error state and the traffic on the related Tx ring was blocked. A patch has been applied to the mlx4 driver so that the driver does not stamp the last completed WQE in the Tx ring, and thus avoids the aforementioned race. BZ# 1034269 When a page table is upgraded, a new top level of the page table is added for the virtual address space, which results in a new Address Space Control Element (ASCE). However, the Translation Lookaside Buffer (TLB) of the virtual address space was not previously flushed on page table upgrade. As a consequence, the TLB contained entries associated with the old ASCE, which led to unexpected program failures and random data corruption. To correct this problem, the TLB entries associated with the old ASCE are now flushed as expected upon page table upgrade. BZ# 1034268 A change in the Linux memory management on IBM System z removed the handler for the Address Space Control Element (ASCE) type of exception. As a consequence, the kernel was unable to handle ASCE exceptions, which led to a kernel panic. Such an exception was triggered, for example, if the kernel attempted to access user memory with an address that was larger than the current page table limit from a user-space program.
This problem has been fixed by calling the standard page fault handler, do_dat_exception, if an ASCE exception is raised. BZ# 1104268 Due to a bug in the mount option parser, prefix paths on a CIFS DFS share could be prepended with a double backslash ('\\'), resulting in an incorrect "No such file" error in certain environments. The mount option parser has been fixed and prefix paths now start with a single backslash as expected. BZ# 995300 Due to a bug in the InfiniBand driver, the ip and ifconfig utilities reported the link status of the IP over InfiniBand (IPoIB) interfaces incorrectly (as "RUNNING" in the case of "ifconfig", and as "UP" in the case of "ip") even if no cable was connected to the respective network card. The problem has been corrected by calling the respective netif_carrier_off() function in the right place in the code. The link status of the IPoIB interfaces is now reported correctly in the described situation. BZ# 995576 An earlier patch to the kernel added the dynamic queue depth throttling functionality to QLogic's qla2xxx driver that allowed the driver to adjust the queue depth for attached SCSI devices. However, the kernel might have crashed when this functionality was enabled in certain environments, such as on systems with EMC PowerPath Multipathing installed that were under heavy I/O load. To resolve this problem, the dynamic queue depth throttling functionality has been removed from the qla2xxx driver. BZ# 1032350 A bug in the Completely Fair Scheduler (CFS) could, under certain circumstances, trigger a race condition while moving a forking task between cgroups. This race could lead to a use-after-free error and a subsequent kernel panic when a child task was accessed while it was pointing to a stale cgroup of its parent task. A patch has been applied to the CFS to ensure that a child task always points to the parent's valid task group. BZ# 998625 When performing I/O operations on a heavily fragmented GFS2 file system, significant performance degradation could occur. This was caused by the allocation strategy that GFS2 used to search for an ideal contiguous chunk of free blocks in all the available resource groups (rgrp). A series of patches has been applied that improves performance of GFS2 file systems in case of heavy fragmentation. GFS2 now allocates the biggest extent found in the rgrp if it fulfills the minimum requirements. GFS2 has also reduced the amount of bitmap searching in case of multi-block reservations by keeping track of the smallest extent for which the multi-block reservation would fail in the given rgrp. This improves GFS2 performance by avoiding unnecessary rgrp free block searches that would fail. Additionally, this patch series fixes a bug in the GFS2 block allocation code where a multi-block reservation was not properly removed from the rgrp's reservation tree when it was disqualified, which eventually triggered a BUG_ON() macro due to an incorrect count of reserved blocks. BZ# 1032347 Due to a race condition in the cgroup code, the kernel task scheduler could trigger a use-after-free bug when it was moving an exiting task between cgroups, which resulted in a kernel panic. This update avoids the kernel panic by introducing a new function, cpu_cgroup_exit(). This function ensures that the kernel does not release a cgroup that is not empty yet. BZ# 1032343 Due to a race condition in the cgroup code, the kernel task scheduler could trigger a kernel panic when it was moving an exiting task between cgroups.
A patch has been applied to avoid this kernel panic by replacing several improperly used function calls in the cgroup code. BZ# 1111631 The automatic route cache rebuilding feature could incorrectly compute the length of a route hash chain if the cache contained multiple entries with the same key but a different TOS, mark, or OIF bit. Consequently, the feature could reach the rebuild limit and disable the routing cache on the system. This problem is fixed by using a helper function that avoids counting such duplicate routes. BZ# 1093819 NFS previously called the drop_nlink() function after removing a file to directly decrease the link count on the related inode. Consequently, NFS did not revalidate the inode cache, and could thus use a stale file handle, resulting in an ESTALE error. A patch has been applied to ensure that NFS validates the inode cache correctly after removing a file. BZ# 1002727 Previously, the vmw_pvscsi driver could attempt to complete a command to the SCSI mid-layer after reporting a successful abort of the command. This led to a double completion bug and a subsequent kernel panic. This update ensures that the pvscsi_abort() function returns SUCCESS only after the abort is completed, preventing the driver from making invalid attempts to complete the command. BZ# 1030094 Due to several bugs in the IPv6 code, a soft lockup could occur when the number of cached IPv6 destination entries reached the garbage collector threshold on a high-traffic router. A series of patches has been applied to address this problem. These patches ensure that the route probing is performed asynchronously to prevent a deadlock with garbage collection. Also, the garbage collector is now run asynchronously, preventing CPUs that concurrently requested the garbage collector from waiting until all other CPUs finish the garbage collection. As a result, soft lockups no longer occur in the described situation. BZ# 1030049 Due to a bug in the NFS code, the state manager and the DELEGRETURN operation could enter a deadlock if an asynchronous session error was received while DELEGRETURN was being processed by the state manager. The state manager became unable to process the failing DELEGRETURN operation because it was waiting for an asynchronous RPC task to complete, which could not have been completed because the DELEGRETURN operation was cycling indefinitely with session errors. A series of patches has been applied to ensure that the asynchronous error handler waits for recovery when a session error is received, and the deadlock no longer occurs. BZ# 1030046 The RPC client always retransmitted a zero-copy of the page data if it timed out before the first RPC transmission completed. However, such a retransmission could cause data corruption if using the O_DIRECT buffer and the first RPC call completed while the respective TCP socket still held a reference to the pages. To prevent the data corruption, retransmission of the RPC call is, in this situation, performed using the sendmsg() function. The sendmsg() function retransmits an authentic reproduction of the first RPC transmission because the TCP socket holds the full copy of the page data. BZ# 1095796 Due to a bug in the nouveau kernel module, the wrong display output could be modified in certain multi-display configurations. Consequently, on Lenovo ThinkPad T420 and W530 laptops with an external display connected, this could result in the LVDS panel "bleeding" to white during startup, and the display controller might become non-functional until after a reboot.
Changes to the display configuration could also trigger the bug under various circumstances. With this update, the nouveau kernel module has been corrected and the said configurations now work as expected. BZ# 1007164 When a guest supports Supervisor Mode Execution Protection (SMEP), KVM sets the appropriate permission bits on the guest page table entries (sptes) to emulate SMEP-enforced access. Previously, KVM was incorrectly verifying whether the "smep" bit was set in the host cr4 register instead of the guest cr4 register. Consequently, if the host supported SMEP, it was enforced even though it was not requested, which could render the guest system unbootable. This update corrects the said "smep" bit check, and the guest system boots as expected in this scenario. BZ# 1029585 If a statically defined gateway became unreachable and its corresponding neighbor entry entered a FAILED state, the gateway stayed in the FAILED state even after it became reachable again. This prevented routing of the traffic through that gateway. This update allows probing such a gateway automatically and routing the traffic through the gateway again once it becomes reachable. BZ# 1009332 Previously, certain network device drivers did not accept ethtool commands right after they were mounted. As a consequence, the current setting of the specified device driver was not applied and an error message was returned. The ETHTOOL_DELAY variable has been added, which makes sure the ethtool utility waits for some time before it tries to apply the option settings, thus fixing the bug. BZ# 1009626 A system could enter a deadlock situation when the Real-Time (RT) scheduler was moving RT tasks between CPUs and the wakeup_kswapd() function was called on multiple CPUs, resulting in a kernel panic. This problem has been fixed by removing a problematic memory allocation and therefore calling the wakeup_kswapd() function from a deadlock-safe context. BZ# 1029530 Previously, the e752x_edac module incorrectly handled the pci_dev usage count, which could reach zero and deallocate a PCI device structure. As a consequence, a kernel panic could occur when the module was loaded multiple times on some systems. This update fixes the usage count handling that is triggered by loading and unloading the module repeatedly, and a kernel panic no longer occurs. BZ# 1011214 The IPv4 and IPv6 code contained several issues related to the conntrack fragmentation handling that prevented fragmented packets from being properly reassembled. This update applies a series of patches and ensures that MTU discovery is handled properly, and fragments are correctly matched and packets reassembled. BZ# 1028682 The kernel did not handle environmental and power warning (EPOW) interrupts correctly. This prevented successful usage of the "virsh shutdown" command to shut down guests on IBM POWER8 systems. This update ensures that the kernel handles EPOW events correctly and also prints informative descriptions for the respective EPOW events. The detailed information about each encountered EPOW can be found in the Run-Time Abstraction Services (RTAS) error log. BZ# 1097915 The bridge MDB RTNL handlers were incorrectly removed after deleting a bridge from the system with more than one bridge configured. This led to various problems, such as the multicast IGMP snooping data from the remaining bridges not being displayed.
This update ensures that the bridge handlers are removed only after the bridge module is unloaded, and the multicast IGMP snooping data now displays correctly in the described situation. BZ# 1098658 A change to the SCSI code fixed a race condition that could occur when removing a SCSI device. However, that change caused performance degradation because it used a certain function from the block layer code that was returning different values compared with later versions of the kernel. This update alters the SCSI code to properly utilize the values returned by the block layer code. BZ# 1026864 A change to the md driver disabled the TRIM operation for RAID5 volumes in order to prevent a possible kernel oops. However, if an MD RAID volume was reshaped to a different RAID level, this could result in TRIM being disabled on the resulting volume, as the RAID4 personality is used for certain reshapes. A patch has been applied that corrects this problem by setting the stacking limits before changing a RAID level, and thus ensuring the correct discard (TRIM) granularity for the RAID array. BZ# 1025439 As a result of a recent fix preventing a deadlock upon an attempt to cover an active XFS log, the behavior of the xfs_log_need_covered() function has changed. However, xfs_log_need_covered() is also called to ensure that the XFS log tail is correctly updated as a part of the XFS journal sync operation. As a consequence, when shutting down an XFS file system, the sync operation failed and some files might have been lost. A patch has been applied to ensure that the tail of the XFS log is updated by logging a dummy record to the XFS journal. The sync operation completes successfully and files are properly written to the disk in this situation. BZ# 1025224 There was an error in the tag insertion logic in how bonding handled cases where a slave device did not have hardware VLAN acceleration. As a consequence, network packets were tagged twice when passing through slave devices without hardware VLAN tag insertion, and network cards using a VLAN over a bonding device did not work properly. This update removes the redundant VLAN tag insertion logic, and the unwanted behavior no longer occurs. BZ# 1024683 Due to a bug in the Emulex lpfc driver, the driver could not allocate a SCSI buffer properly, which resulted in severe performance degradation of lpfc adapters on 64-bit PowerPC systems. A patch addressing this problem has been applied so that lpfc allocates the SCSI buffer correctly and lpfc adapters now work as expected on 64-bit PowerPC systems. BZ# 1024631 Previously, certain SELinux functions did not correctly handle the TCP synchronize-acknowledgment (SYN-ACK) packets when processing IPv4 labeled traffic over an INET socket. The initial SYN-ACK packets were labeled incorrectly by SELinux, and as a result, the access control decision was made using the server socket's label instead of the new connection's label. In addition, SELinux was not properly inspecting outbound labeled IPsec traffic, which led to similar problems with incorrect access control decisions. A series of patches that addresses these problems has been applied to SELinux. The initial SYN-ACK packets are now labeled correctly and SELinux processes all SYN-ACK packets as expected. BZ# 1100127 A change to the Open vSwitch kernel module introduced a use-after-free problem that resulted in a kernel panic on systems that use this module.
This update ensures that the affected object is freed in the correct place in the code, thus avoiding the problem. BZ# 1024024 Previously, the GFS2 kernel module leaked memory in the gfs2_bufdata slab cache and allowed a use-after-free race condition to be triggered in the gfs2_remove_from_journal() function. As a consequence, after unmounting the GFS2 file system, the GFS2 slab cache could still contain some objects, which subsequently could, under certain circumstances, result in a kernel panic. A series of patches has been applied to the GFS2 kernel module, ensuring that all objects are freed from the slab cache properly and the kernel panic is avoided. BZ# 1023897 A bug in the RSXX DMA handling code allowed DISCARD operations to call the pci_unmap_page() function, which triggered a race condition on the PowerPC architecture when DISCARD, READ, and WRITE operations were issued simultaneously. However, DISCARD operations are always assigned a DMA address of 0 because they are never mapped. Therefore, this race could result in freeing memory that was mapped for another operation and a subsequent EEH event. A patch has been applied, preventing the DISCARD operations from calling pci_unmap_page(), and thus avoiding the aforementioned race condition. BZ# 1023272 Due to a regression bug in the mlx4 driver, Mellanox mlx4 adapters could become unresponsive under heavy load, along with IOMMU allocation errors being logged to the system logs. A patch has been applied to the mlx4 driver so that the driver now calculates the last memory page fragment when allocating memory in the Rx path. BZ# 1021325 When performing read operations on an XFS file system, failed buffer readahead can leave the buffer in the cache memory marked with an error. This could lead to incorrect detection of stale errors during completion of an I/O operation because most callers do not zero out the b_error field of the buffer on a subsequent read. To avoid this problem and ensure correct I/O error detection, the b_error field of the used buffer is now zeroed out before submitting an I/O operation on a file. BZ# 1034237 Due to the locking mechanism that the kernel used while handling Out of Memory (OOM) situations in memory control groups (cgroups), the OOM killer did not work as intended in cases where many processes triggered an OOM. As a consequence, the entire system could become or appear to be unresponsive. A series of patches has been applied to improve this locking mechanism so that the OOM killer now works as expected in memory cgroups under heavy OOM load. BZ# 1104503 Due to a bug in the GRE tunneling code, it was impossible to create a GRE tunnel with a custom name. This update corrects the behavior of the ip_tunnel_find() function, allowing users to create GRE tunnels with custom names (see the example below). BZ# 1020685 When the system was under memory stress, a double-free bug in the tg3 driver could have been triggered, resulting in a NIC being brought down unexpectedly followed by a kernel panic. A patch has been applied that restructures the respective code so that the affected ring buffer is freed correctly. BZ# 1021044 If the BIOS returned a negative value for the critical trip point for the given thermal zone during a system boot, the whole thermal zone was invalidated and an ACPI error was printed. However, the thermal zone may still have been needed for cooling. With this update, the ACPI thermal management has been modified to only disable the relevant critical trip point in this situation.
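As promised in the GRE entry above (BZ# 1104503), a tunnel with a custom, non-default name can be created with iproute2; the addresses below are documentation placeholders:
# create and bring up a GRE tunnel named "mygre0" instead of the default gre0
ip tunnel add mygre0 mode gre remote 192.0.2.1 local 192.0.2.2 ttl 255
ip link set mygre0 up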
BZ# 1020461 Due to a missing part of the bcma driver, the brcmsmac kernel module did not have a list of internal aliases that was needed by the kernel to properly handle the related udev events. Consequently, when the bcma driver scanned for the devices at boot time, these udev events were ignored and the kernel did not load the brcmsmac module automatically. A patch that provides the missing aliases has been applied so that the udev requests of the brcmsmac module are now handled as expected and the kernel loads the brcmsmac module automatically on boot. BZ# 1103471 Previously, KVM did not accept a PCI domain (segment) number for host PCI devices, making it impossible to assign a PCI device that was a part of a non-zero PCI segment to a virtual machine. To resolve this problem, KVM has been extended to accept a PCI domain number in addition to slot, device, and function numbers. BZ# 1020290 Due to a bug in the EDAC driver, the driver failed to decode and report errors on AMD family 16h processors correctly. This update adds a missing case statement to the code so that the EDAC driver now handles errors as expected. BZ# 1019578 Changes to the igb driver caused the ethtool utility to determine and display some capabilities of the Ethernet devices incorrectly. This update fixes the igb driver so that the actual link capabilities are now determined properly, and ethtool displays values as accurately as possible depending on the data available to the driver. BZ# 1019346 Previously, devices using the ixgbevf driver that were assigned to a virtual machine could not adjust their Jumbo MTU value automatically if the Physical Function (PF) interface was down; when the PF device was brought up, the MTU value on the related Virtual Function (VF) device was set incorrectly. This was caused by the way the communication channel between PF and VF interfaces was set up and the first negotiation attempt between PF and VF was made. To fix this problem, structural changes to the ixgbevf driver have been made so that the kernel can now negotiate the correct API between PF and VF successfully and the MTU value is now set correctly on the VF interface in this situation. BZ# 1024006 A chunk of a patch was left out when backporting a batch of patches that fixed an infinite loop problem in the LOCK operation with a zero state ID during NFSv4 state ID recovery. As a consequence, the system could become unresponsive on numerous occasions. The missing chunk of the patch has been added, resolving this hang issue. BZ# 1018138 The kernel previously did not reset the kernel ring buffer if the trace clock was changed during tracing. However, the new clock source could be inconsistent with the previous clock source, and the resulting trace record thus could contain incomparable time stamps. To ensure that the trace record contains only comparable time stamps, the ring buffer is now reset whenever the trace clock changes. BZ# 1024548 When using Haswell HDMI audio controllers with an unaligned DMA buffer size, these audio controllers could become locked up until the next reboot for certain audio stream configurations. A patch has been applied to Intel's High Definition Audio (HDA) driver that enforces the DMA buffer alignment setting for the Haswell HDMI audio controllers. These audio controllers now work as expected. BZ# 1024689 A change to the virtual file system (VFS) code included the reduction of the PATH_MAX variable by 32 bytes.
However, this change was not propagated to the do_getname() function, which had a negative impact on interactions between the getname() and do_getname() functions. This update modifies do_getname() accordingly and this function now works as expected. BZ# 1028372 Previously, when removing an IPv6 address from an interface, unreachable routes related to that address were not removed from the IPv6 routing table. This happened because the IPv6 code used an inappropriate function when searching for the routes. To avoid this problem, the IPv6 code has been modified to use the ip6_route_lookup() function instead of rt6_lookup() in this situation. All related routes are now properly deleted from the routing tables when an IPv6 address is removed (see the example below). BZ# 1029200 A bug in the statistics flow in the bnx2x driver caused the card's DMA Engine (DMAE) to be accessed without taking a necessary lock. As a consequence, previously queued DMAE commands could be overwritten and the Virtual Functions could then time out on requests to their respective Physical Functions. The likelihood of triggering the bug was higher with more SR-IOV Virtual Functions configured. Overwriting of the DMAE commands could also result in other problems even without using SR-IOV. This update ensures that all flows utilizing DMAE use the same API and that the proper locking scheme is kept by all these flows. BZ# 1029203 The bnx2x driver handled unsupported TLVs received from a Virtual Function (VF) using the VF-PF channel incorrectly; when a driver of the VF sent a known but unsupported TLV command to the Physical Function, the driver of the PF did not reply. As a consequence, the VF-PF channel was left in an unstable state and the VF eventually timed out. A patch has been applied to correct the VF-PF locking scheme so that unsupported TLVs are properly handled and responded to by the PF side. Also, unsupported TLVs could previously break a mutex used to lock the VF-PF operations. The mutex then stopped protecting critical sections of the code, which could result in error messages being generated when the PF received additional TLVs from the VF. A patch has been applied that corrects the VF-PF channel locking scheme, and unsupported TLVs thus can no longer break the VF-PF lock. BZ# 1007039 When performing buffered WRITE operations from multiple processes to a single file, the NFS code previously always verified whether the lock owner information was identical for the file being accessed even though no file locks were involved. This led to performance degradation because forked child processes had to synchronize dirty data written to a disk by the parent process before writing to a file. Also, when coalescing requests into a single READ or WRITE RPC call, NFS refused the request if the lock owner information did not match for the given file even though no file locks were involved. This also caused performance degradation. A series of patches has been applied that relaxes the relevant test conditions so that lock owner compatibility is no longer verified in the described cases, which resolves these performance issues. BZ# 1005491 Due to a change that altered the format of the txselect parameter, the InfiniBand qib driver was unable to support HP branded QLogic QDR InfiniBand cards in HP Blade servers. To resolve this problem, the driver's parsing routine, setup_txselect(), has been modified to handle multi-value strings.
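To observe the IPv6 route cleanup referenced above (BZ# 1028372), removing an address and then listing the routes for the interface should no longer show stale unreachable entries; the address and device name are placeholders:
# delete the address and confirm that its related routes were removed as well
ip -6 addr del 2001:db8::1/64 dev eth0
ip -6 route show dev eth0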
BZ# 994724 Due to a race condition that allowed a RAID array to be written to while it was being stopped, the md driver could enter a deadlock situation. The deadlock prevented buffers from being written out to the disk, and all I/O operations to the device became unresponsive. With this update, the md driver has been modified so this deadlock is now avoided. BZ# 1090423 Previously, recovery of a double-degraded RAID6 array could, under certain circumstances, result in data corruption. This could happen because the md driver was using an optimization that is safe to use only for single-degraded arrays. This update ensures that this optimization is skipped during the recovery of double-degraded RAID6 arrays. BZ# 1034348 NFS previously allowed a race between "silly rename" operations and the rmdir() function to occur when removing a directory right after an unlinked file in the directory was closed. As a result, rmdir() could fail with an EBUSY error. This update applies a patch ensuring that NFS waits for any asynchronous operations to complete before performing the rmdir() operation. BZ# 1034487 A deadlock between the state manager, the kswapd daemon, and the sys_open() function could occur when the state manager was recovering from an expired state and recovery OPEN operations were being processed. To fix this problem, NFS has been modified to ignore all errors from the LAYOUTRETURN operation (a pNFS operation) except for "NFS4ERR_DELAY" in this situation. BZ# 980621 Previously, in certain environments, such as an HP BladeSystem Enclosure with several Blade servers, the kdump kernel could experience a kernel panic or become unresponsive during boot due to a lack of available interrupt vectors. As a consequence, kdump failed to capture a core dump. To increase the number of available interrupt vectors, the kdump kernel can boot up with more CPUs. However, the kdump kernel always tries to boot up with the bootstrap processor (BSP), which can cause the kernel to fail to bring up more than one CPU under certain circumstances. This update introduces a new kernel parameter, disable_cpu_apicid, which allows the kdump kernel to disable the BSP during boot and then to successfully boot up with multiple processors. This resolves the lack of available interrupt vectors for systems with a high number of devices and ensures that kdump can now successfully capture a core dump on these systems. BZ# 1036814 The ext4_releasepage() function previously emitted an unnecessary warning message when it was passed a page with the PageChecked flag set. To avoid irrelevant warnings in the kernel log, this update removes the related WARN_ON() from the ext4 code. BZ# 960275 Previously, user space packet capturing libraries, such as libpcap, had only limited means to determine which Berkeley Packet Filter (BPF) extensions are supported by the current kernel. This limitation had a negative effect on the VLAN packet filtering that is performed by the tcpdump utility, and tcpdump sometimes was not able to capture filtered packets correctly. Therefore, this update introduces a new option, SO_BPF_EXTENSIONS, which can be specified as an argument of the getsockopt() function. This option enables packet capturing tools to obtain information about which BPF extensions are supported by the current kernel. As a result, the tcpdump utility can now capture packets properly.
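As an illustration of the VLAN filtering scenario in BZ# 960275, a typical capture that relies on the BPF VLAN extensions which libpcap can now probe through SO_BPF_EXTENSIONS looks like the following; the interface name is a placeholder:
# capture VLAN-tagged HTTP traffic; the "vlan" keyword compiles to the BPF VLAN extension
tcpdump -i eth0 -nn 'vlan and tcp port 80'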
BZ# 1081282 The RTM_NEWLINK messages can contain information about every virtual function (VF) for the given network interface (NIC) and can become very large if this information is not filtered. Previously, the kernel netlink interface allowed the getifaddr() function to process RTM_NEWLINK messages with unfiltered content. Under certain circumstances, the kernel netlink interface would omit data for the given group of NICs, causing getifaddr() to loop indefinitely, unable to return information about the affected NICs. This update resolves this problem by supplying only the RTM_NEWLINK messages with filtered content. BZ# 1040349 When booting a guest in the Hyper-V environment and enough Programmable Interval Timer (PIT) interrupts were lost or not injected into the guest on time, the kernel panicked and the guest failed to boot. This problem has been fixed by bypassing the relevant PIT check when the guest is running under the Hyper-V environment. BZ# 1040393 The isci driver previously triggered an erroneous BUG_ON() assertion in case of a hard reset timeout in the sci_apc_agent_link_up() function. If a SATA device was unable to restore the link in time after the reset, the isci port had to return to the "awaiting link-up" state. However, in such a case, the port may not have been in the "resetting" state, causing a kernel panic. This problem has been fixed by removing that incorrect BUG_ON() assertion. BZ# 1049052 Due to several bugs in the network console logging, a race condition between the network console send operation and the driver's IRQ handler could occur, or the network console could access invalid memory content. As a consequence, the respective driver, such as vmxnet3, triggered a BUG_ON() assertion and the system terminated unexpectedly. A patch addressing these bugs has been applied so that the driver's IRQs are disabled before processing the send operation and the network console now accesses the RCU-protected (read-copy update) data properly. Systems using the network console logging no longer crash due to the aforementioned conditions. BZ# 1057704 When a network interface is running in promiscuous (PROMISC) mode, the interface may receive and process VLAN-tagged frames even though no VLAN is attached to the interface. However, the enic driver did not handle processing of packets with VLAN-tagged frames in PROMISC mode correctly if the frames had no VLAN group assigned, which led to various problems. To handle the VLAN-tagged frames without a VLAN group properly, the frames have to be processed by the VLAN code, and the enic driver thus no longer verifies whether the packet's VLAN group field is empty. BZ# 1058528 The dm-bufio driver did not call the blk_unplug() function to flush plugged I/O requests. Therefore, the requests submitted by dm-bufio were delayed by 3 ms, which could cause performance degradation. With this update, dm-bufio calls blk_unplug() as expected, avoiding any related performance issues. BZ# 1059943 A change that modified the linkat() system call introduced a mount point reference leak and a subsequent memory leak in cases where a file system link operation returned the ESTALE error code. These problems have been fixed by properly freeing the old mount point reference in such a case. BZ# 1062494 When allocating kernel memory, the SCSI device handlers called the sizeof() function with a structure name as its argument.
However, the modified files were using an incorrect structure name, which resulted in an insufficient amount of memory being allocated and subsequent memory corruption. This update modifies the relevant sizeof() function calls to use a pointer to the structure rather than the structure name so that the memory is now always allocated correctly. BZ# 1065304 A patch to the kernel scheduler fixed a kernel panic caused by a divide-by-zero bug in the init_numa_sched_groups_power() function. However, that patch introduced a regression on systems with standard Non-Uniform Memory Access (NUMA) topology so that cpu_power in all but one NUMA domain was set to twice the expected value. This resulted in incorrect task scheduling and some processors being left idle even though there were enough queued tasks to handle, which had a negative impact on system performance. This update ensures that cpu_power on systems with standard NUMA topology is set to the expected values by adding an estimate to cpu_power for every uncounted CPU. Task scheduling now works as expected on these systems without performance issues related to this bug. BZ# 1018581 Microsoft Windows 7 KVM guests could become unresponsive during reboot because KVM did not manage to inject a Non-Maskable Interrupt (NMI) into the guest while the guest was running in user mode. To resolve this problem, a series of patches has been applied to the KVM code, ensuring that KVM handles NMIs correctly during the reboot of the guest machine. BZ# 1029381 Prior to this update, a guest-provided value was used as the head length of the socket buffer allocated on the host. If the host was under heavy memory load and the guest-provided value was too large, the allocation could have failed, resulting in stalls and packet drops in the guest's Tx path. With this update, the guest-provided value has been limited to a reasonable size so that socket buffer allocations on the host succeed regardless of the memory load on the host, and guests can send packets without experiencing packet drops or stalls. BZ# 1080637 The turbostat utility produced error messages when used on systems with the fourth generation of Intel Core processors. To fix this problem, the kernel has been updated to provide the C-state residency information for the C8, C9, and C10 C-states. Enhancements BZ# 876275 The kernel now supports memory configurations with more than 1TB of RAM on AMD systems. BZ# 990694 Users can now set ToS, TTL, and priority values in IPv4 on a per-packet basis. BZ# 1038227 Several significant enhancements to device-mapper have been introduced in Red Hat Enterprise Linux 6.6: The dm-cache device-mapper target, which allows fast storage devices to act as a cache for slower storage devices, has been added as a Technology Preview. The device-mapper-multipath ALUA priority checker no longer places the preferred path device in its own path group if there are other paths that could be used for load balancing. The fast_io_fail_tmo parameter in the multipath.conf file now works on iSCSI devices in addition to Fibre Channel devices. Better performance can now be achieved in setups with a large number of multipath devices due to an improved way in which the device-mapper multipath handles sysfs files. A new force_sync parameter in multipath.conf has been introduced. The parameter disables asynchronous path checks, which can help limit the number of CPU contention issues on setups with a large number of multipath devices.
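As a minimal sketch of where the multipath options mentioned above (BZ# 1038227) are set, the defaults section of /etc/multipath.conf might contain entries such as the following; the values are illustrative assumptions, not recommended settings:
defaults {
    # disable asynchronous path checks to limit CPU contention on large setups
    force_sync yes
    # fail outstanding I/O after 5 seconds on a lost path (now also honored for iSCSI)
    fast_io_fail_tmo 5
}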
BZ# 922970 Support for the generation of Intel's mobile platform has been added to Red Hat Enterprise Linux 6.6, and the relevant drivers have been updated. BZ# 922929 A future AMD processor provides a new bank of Model Specific Registers (MSRs) for L2 events, which are used for critical event types. These L2 cache performance counters are highly beneficial for performance analysis and debugging. BZ# 1076147 The dm-crypt module has been modified to use multiple CPUs, which improves its encryption performance significantly. BZ# 1054299 The qla2xxx driver has been upgraded to version 8.05.00.03.06.5-k2, which provides a number of bug fixes over the previous version in order to correct various timeout problems with the mailbox command. BZ# 1053831 Keywords for the IPL device (ipldev) and console device (condev) on IBM System z have been enabled to ease installation when the system uses the cio_ignore command to blacklist all devices at install time and does not have a default CCW console device number, when the system has no devices other than the IPL device as a base to clone Linux guests, or with ramdisk-based installations with no devices other than the CCW console. BZ# 872311 The cifs kernel module has been updated to handle FIPS mode cipher filtering efficiently in CIFS. All Red Hat Enterprise Linux 6 users are advised to install these updated packages, which correct these issues, fix these bugs, and add these enhancements. The system must be rebooted for this update to take effect.
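For reference, the IPL and console device keywords noted above (BZ# 1053831) are typically combined with a blanket blacklist on the kernel command line; treat the exact syntax as an assumption to be verified against the cio_ignore documentation for your release:
# ignore all CCW devices except the IPL device and the console device
cio_ignore=all,!ipldev,!condev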
|
[
"INFO: task sshd:6425 blocked for more than 120 seconds. INFO: task ptymonitor:22510 blocked for more than 120 seconds.",
"libv4l2: error turning on stream: No space left on device"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/kernel
|
Authorization
|
Authorization Red Hat Developer Hub 1.3 Configuring authorization by using role based access control (RBAC) in Red Hat Developer Hub Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authorization/index
|
Streams for Apache Kafka API Reference
|
Streams for Apache Kafka API Reference Red Hat Streams for Apache Kafka 2.7 Configure a deployment of Streams for Apache Kafka 2.7 on OpenShift Container Platform
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/index
|
Chapter 10. Advanced managed cluster configuration with PolicyGenTemplate resources
|
Chapter 10. Advanced managed cluster configuration with PolicyGenTemplate resources You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters. 10.1. Deploying additional changes to clusters If you require cluster configuration changes outside of the base GitOps Zero Touch Provisioning (ZTP) pipeline configuration, there are three options: Apply the additional configuration after the GitOps ZTP pipeline is complete When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget. Add content to the GitOps ZTP library The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required. Create extra manifests for the cluster installation Extra manifests are applied during installation and make the installation process more efficient. Important Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline 10.2. Using PolicyGenTemplate CRs to override source CRs content PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR. The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. Procedure Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the GitOps Zero Touch Provisioning (ZTP) container. 
Create an /out folder: USD mkdir -p ./out Extract the source CRs: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15.1 extract /home/ztp --tar | tar x -C ./out Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true Note Any fields in the source CR which contain USD... are removed from the generated CR if they are not provided in the PolicyGenTemplate CR. Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: "2-19,22-39" reserved: "0-1,20-21" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP argo CD application. Example output The GitOps ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content: --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In the /source-crs folder that you extract from the ztp-site-generate container, the USD syntax is not used for template substitution as implied by the syntax. Rather, if the policyGen tool sees the USD prefix for a string and you do not specify a value for that field in the related PolicyGenTemplate CR, the field is omitted from the output CR entirely. An exception to this is the USDmcp variable in /source-crs YAML files that is substituted with the specified value for mcp from the PolicyGenTemplate CR. For example, in example/policygentemplates/group-du-standard-ranGen.yaml , the value for mcp is worker : spec: bindingRules: group-du-standard: "" mcp: "worker" The policyGen tool replace instances of USDmcp with worker in the output CRs. 10.3. 
Adding custom content to the GitOps ZTP pipeline Perform the following procedure to add new content to the GitOps ZTP pipeline. Procedure Create a subdirectory named source-crs in the directory that contains the kustomization.yaml file for the PolicyGenTemplate custom resource (CR). Add your user-provided CRs to the source-crs subdirectory, as shown in the following example: example βββ policygentemplates βββ dev.yaml βββ kustomization.yaml βββ mec-edge-sno1.yaml βββ sno.yaml βββ source-crs 1 βββ PaoCatalogSource.yaml βββ PaoSubscription.yaml βββ custom-crs | βββ apiserver-config.yaml | βββ disable-nic-lldp.yaml βββ elasticsearch βββ ElasticsearchNS.yaml βββ ElasticsearchOperatorGroup.yaml 1 The source-crs subdirectory must be in the same directory as the kustomization.yaml file. Update the required PolicyGenTemplate CRs to include references to the content you added in the source-crs/custom-crs and source-crs/elasticsearch directories. For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-dev" namespace: "ztp-clusters" spec: bindingRules: dev: "true" mcp: "master" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: "group-dev-cluster-log-ns" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: "group-dev-cluster-log-operator-group" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: "group-dev-cluster-log-sub" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: "group-dev-lso-ns" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: "group-dev-lso-operator-group" - fileName: StorageSubscription.yaml remediationAction: inform policyName: "group-dev-lso-sub" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: "group-dev-pao-ns" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: "group-dev-pao-cat-source" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: "group-dev-pao-sub" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: "group-dev-elasticsearch-ns" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: "group-dev-elasticsearch-operator-group" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: "group-dev-apiserver-config" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: "group-dev-disable-nic-lldp" 1 2 Set fileName to include the relative path to the file from the /source-crs parent directory. Commit the PolicyGenTemplate change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application. Update the ClusterGroupUpgrade CR to include the changed PolicyGenTemplate and save it as cgu-test.yaml . The following example shows a generated cgu-test.yaml file. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated ClusterGroupUpgrade CR by running the following command: USD oc apply -f cgu-test.yaml Verification Check that the updates have succeeded by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies 10.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances. You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies. The GitOps Zero Touch Provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example: spec: evaluationInterval: compliant: 30m noncompliant: 20s To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example: spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: "sriov-sub-policy" evaluationInterval: compliant: never noncompliant: 10s Commit the PolicyGenTemplate CRs files in the Git repository and push your changes. Verification Check that the managed spoke cluster policies are monitored at the expected intervals. Log in as a user with cluster-admin privileges on the managed cluster. Get the pods that are running in the open-cluster-management-agent-addon namespace. 
Run the following command: USD oc get pods -n open-cluster-management-agent-addon Example output NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d Check the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod: USD oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb Example output 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"} 10.5. Signalling GitOps ZTP cluster deployment completion with validator inform policies Create a validator inform policy that signals when the GitOps Zero Touch Provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters. Procedure Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml . You only need one standalone PolicyGenTemplate CR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters: Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml) apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno-validator" 1 namespace: "ztp-group" 2 spec: bindingRules: group-du-sno: "" 3 bindingExcludedRules: ztp-done: "" 4 mcp: "master" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: "du-policy" 7 1 The name of PolicyGenTemplates object. This name is also used as part of the names for the placementBinding , placementRule , and policy that are created in the requested namespace . 2 This value should match the namespace used in the group PolicyGenTemplates . 3 The group-du-* label defined in bindingRules must exist in the SiteConfig files. 4 The label defined in bindingExcludedRules must be`ztp-done:`. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager. 5 mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml . It should be master for single node and three-node cluster deployments and worker for standard cluster deployments. 6 Optional. The default value is inform . 7 This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single node example is group-du-sno-validator-du-policy . Commit the PolicyGenTemplate CR file in your Git repository and push the changes. Additional resources Upgrading GitOps ZTP 10.6. Configuring power states using PolicyGenTemplates CRs For low latency and high-performance edge deployments, it is necessary to disable or limit C-states and P-states. With this configuration, the CPU runs at a constant frequency, which is typically the maximum turbo frequency. This ensures that the CPU is always running at its maximum speed, which results in high performance and low latency. This leads to the best latency for workloads. 
However, this also leads to the highest power consumption, which might not be necessary for all workloads. Workloads can be classified as critical or non-critical, with critical workloads requiring disabled C-state and P-state settings for high performance and low latency, while non-critical workloads use C-state and P-state settings for power savings at the expense of some latency and performance. You can configure the following three power states using GitOps Zero Touch Provisioning (ZTP): High-performance mode provides ultra low latency at the highest power consumption. Performance mode provides low latency at a relatively high power consumption. Power saving balances reduced power consumption with increased latency. The default configuration is for a low latency, performance mode. PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details onto the base source CRs provided with the GitOps plugin in the ztp-site-generate container. Configure the power states by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The following common prerequisites apply to configuring all three power states. Prerequisites You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. You have followed the procedure described in "Preparing the GitOps ZTP site configuration repository". Additional resources Configuring node power consumption and realtime processing with workload hints 10.6.1. Configuring performance mode using PolicyGenTemplate CRs Follow this example to set performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . Performance mode provides low latency at a relatively high power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 10.6.2. Configuring high-performance mode using PolicyGenTemplate CRs Follow this example to set high performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . High performance mode provides ultra low latency at the highest power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set high-performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] 
spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 10.6.3. Configuring power saving mode using PolicyGenTemplate CRs Follow this example to set power saving mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The power saving mode balances reduced power consumption with increased latency. Prerequisites You enabled C-states and OS-controlled P-states in the BIOS. Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to configure power saving mode. It is recommended to configure the CPU governor for the power saving mode through the additional kernel arguments object. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - "cpufreq.default_governor=schedutil" 1 1 The schedutil governor is recommended, however, other governors that can be used include ondemand and powersave . Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. Verification Select a worker node in your deployed cluster from the list of nodes identified by using the following command: USD oc get nodes Log in to the node by using the following command: USD oc debug node/<node-name> Replace <node-name> with the name of the node you want to verify the power state on. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths as shown in the following example: # chroot /host Run the following command to verify the applied power state: # cat /proc/cmdline Expected output For power saving mode the intel_pstate=passive . Additional resources Configuring power saving for nodes that run colocated high and low priority workloads Configuring host firmware for low latency and high performance Preparing the GitOps ZTP site configuration repository 10.6.4. Maximizing power savings Limiting the maximum CPU frequency is recommended to achieve maximum power savings. Enabling C-states on the non-critical workload CPUs without restricting the maximum CPU frequency negates much of the power savings by boosting the frequency of the critical CPUs. Maximize power savings by updating the sysfs plugin fields, setting an appropriate value for max_perf_pct in the TunedPerformancePatch CR for the reference configuration. This example based on the group-du-sno-ranGen.yaml describes the procedure to follow to restrict the maximum CPU frequency. Prerequisites You have configured power savings mode as described in "Using PolicyGenTemplate CRs to configure power savings mode". Procedure Update the PolicyGenTemplate entry for TunedPerformancePatch in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates . To maximize power savings, add max_perf_pct as shown in the following example: - fileName: TunedPerformancePatch.yaml policyName: "config-policy" spec: profile: - name: performance-patch data: | [...] 
[sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1 1 The max_perf_pct controls the maximum frequency the cpufreq driver is allowed to set as a percentage of the maximum supported CPU frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Note To maximize power savings, set a lower value. Setting a lower value for max_perf_pct limits the maximum CPU frequency, thereby reducing power consumption, but also potentially impacting performance. Experiment with different values and monitor the system's performance and power consumption to find the optimal setting for your use-case. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 10.7. Configuring LVM Storage using PolicyGenTemplate CRs You can configure Logical Volume Manager (LVM) Storage for managed clusters that you deploy with GitOps Zero Touch Provisioning (ZTP). Note You use LVM Storage to persist event subscriptions when you use PTP events or bare-metal hardware events with HTTP transport. Use the Local Storage Operator for persistent storage that uses local volumes in distributed units. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Create a Git repository where you manage your custom site configuration data. Procedure To configure LVM Storage for new managed clusters, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: - fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.15 policyName: subscription-policies Note The Storage LVMO subscription is deprecated. In future releases of OpenShift Container Platform, the storage LVMO subscription will not be available. Instead, you must use the Storage LVMS subscription. In OpenShift Container Platform 4.15, you can use the Storage LVMS subscription instead of the LVMO subscription. The LVMS subscription does not require manual overrides in the common-ranGen.yaml file. Add the following YAML to spec.sourceFiles in the common-ranGen.yaml file to use the Storage LVMS subscription: - fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies Add the LVMCluster CR to spec.sourceFiles in your specific group or individual site configuration file. For example, in the group-du-sno-ranGen.yaml file, add the following: - fileName: StorageLVMCluster.yaml policyName: "lvms-config" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 1 This example configuration creates a volume group ( vg1 ) with all the available devices, except the disk where OpenShift Container Platform is installed. A thin-pool logical volume is also created. Merge any other required changes and files with your custom site repository. 
Commit the PolicyGenTemplate changes in Git, and then push the changes to your site configuration repository to deploy LVM Storage to new sites using GitOps ZTP. 10.8. Configuring PTP events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 10.8.1. Configuring PTP events that use HTTP transport You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the transport host: - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043 Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the PtpOperatorConfig resource when you use HTTP transport with PTP events. Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . 
Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Using PolicyGenTemplate CRs to override source CRs content 10.8.2. Configuring PTP events that use AMQP transport You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator: #AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy : - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: "amqp://amq-router.amq-router.svc.cluster.local" Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). 
When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml : In .sourceFiles , add the Interconnect CR file that configures the AMQ router to the config-policy : - fileName: AmqInstance.yaml policyName: "config-policy" Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Installing the AMQ messaging bus For more information about container image registries, see OpenShift image registry overview . 10.9. Configuring bare-metal events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 10.9.1. Configuring bare-metal events that use HTTP transport You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Configure the Bare Metal Event Relay Operator by adding the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml 1 policyName: "config-policy" spec: nodeSelector: {} transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043" logLevel: "info" 1 Each baseboard management controller (BMC) requires a single HardwareEvent CR only. Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the HardwareEvent custom resource (CR) when you use HTTP transport with bare-metal events. Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" Additional resources Installing the Bare Metal Event Relay using the CLI Creating the bare-metal event and Secret CRs 10.9.2. 
Configuring bare-metal events that use AMQP transport You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file: - fileName: AmqInstance.yaml policyName: "config-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml policyName: "config-policy" spec: nodeSelector: {} transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" 1 logLevel: "info" 1 The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace . For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local" , the AMQ Interconnect name and namespace are both set to amq-router . Note Each baseboard management controller (BMC) requires a single HardwareEvent resource only. Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" 10.10. Configuring the Image Registry Operator for local caching of images OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times. Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps Zero Touch Provisioning (ZTP).
This is useful in Edge computing scenarios where clusters are deployed at the far edge of the network. Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the GitOps ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration. Note The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images. Additional resources OpenShift Container Platform registry overview . 10.10.1. Configuring disk partitioning with SiteConfig Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps Zero Touch Provisioning (ZTP). The disk partition details in the SiteConfig CR must match the underlying disk. Important You must complete this procedure at installation time. Prerequisites Install Butane. Procedure Create the storage.bu file by using the following example YAML file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation fails. 3 Specify the size of the partition. If the value is too small, the deployments fails. Convert the storage.bu to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Use a tool such as JSON Pretty Print to convert the output into JSON format. Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR. Example [...] 
spec: clusters: - nodes: - ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } [...] Note If the .spec.clusters.nodes.ignitionConfigOverride field does not exist, create it. Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"] Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status. Enter into a debug session on the single-node OpenShift node by running the following command. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-sno-node Set /host as the root directory within the debug shell by running the following command. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host List information about all available block devices by running the following command: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ββsda1 8:1 0 1M 0 part ββsda2 8:2 0 127M 0 part ββsda3 8:3 0 384M 0 part /boot ββsda4 8:4 0 243.6G 0 part /var β /sysroot/ostree/deploy/rhcos/var β /usr β /etc β / β /sysroot ββsda5 8:5 0 202.5G 0 part /var/lib/containers Display information about the file system disk space usage by running the following command: # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 10.10.2. Configuring the image registry using PolicyGenTemplate CRs Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration. Prerequisites You have configured a disk partition in the managed cluster. You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP). Procedure Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml : sourceFiles: # storage class - fileName: StorageClass.yaml policyName: "sc-for-image-registry" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: "100" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: "pvc-for-image-registry" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: "pv-for-image-registry" metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" - fileName: ImageRegistryConfig.yaml policyName: "config-for-image-registry" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: storage: pvc: claim: "image-registry-pvc" 1 Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together. 2 In ImageRegistryPV.yaml , ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR. Important Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 
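The PersistentVolume applied through ImageRegistryPV.yaml expects a dedicated partition that is mounted at /var/imageregistry on the managed cluster node at installation time. One way to provide that partition is to adapt the storage.bu Butane example from "Configuring disk partitioning with SiteConfig". The following is a minimal sketch only, not the reference configuration: the device path, partition start, and partition size are placeholder values that you must replace to match your hardware, and the new partition must not overlap existing partitions.
variant: fcos
version: 1.3.0
storage:
  disks:
    - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0  # placeholder root disk
      wipe_table: false
      partitions:
        - label: var-imageregistry
          start_mib: <start_of_partition>  # placeholder, after the existing partitions
          size_mib: <partition_size>       # placeholder
  filesystems:
    - path: /var/imageregistry
      device: /dev/disk/by-partlabel/var-imageregistry
      format: xfs
      wipe_filesystem: true
      with_mount_unit: true
      mount_options:
        - defaults
As with the /var/lib/containers example, convert the Butane file with the butane command and copy the resulting JSON into the .spec.clusters.nodes.ignitionConfigOverride field of the SiteConfig CR before you install the cluster.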
Verification Use the following steps to troubleshoot errors with the local image registry on the managed clusters: Verify successful login to the registry while logged in to the managed cluster. Run the following commands: Export the managed cluster name: USD cluster=<managed_cluster_name> Get the managed cluster kubeconfig details: USD oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster Download and export the cluster kubeconfig : USD oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster Verify access to the image registry from the managed cluster. See "Accessing the registry". Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster: USD oc get image.config.openshift.io cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2021-10-08T19:02:39Z" generation: 5 name: cluster resourceVersion: "688678648" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster: USD oc get pv image-registry-sc Check that the registry* pod is running and is located under the openshift-image-registry namespace. USD oc get pods -n openshift-image-registry | grep registry* Example output cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d Check that the disk partition on the managed cluster is correct: Open a debug shell to the managed cluster: USD oc debug node/sno-1.example.com Run lsblk to check the host disk partitions: sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom 1 /var/imageregistry indicates that the disk is correctly partitioned. Additional resources Accessing the registry 10.11. Using hub templates in PolicyGenTemplate CRs Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP). Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created. The following supported hub template functions are available for use in GitOps ZTP with TALM: fromConfigmap returns the value of the provided data key in the named ConfigMap resource. Note There is a 1 MiB size limit for ConfigMap CRs.
The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true base64enc returns the base64-encoded value of the input string base64dec returns the decoded value of the base64-encoded input string indent returns the input string with added indent spaces autoindent returns the input string with added indent spaces based on the spacing used in the parent template toInt casts and returns the integer value of the input value toBool converts the input string into a boolean value, and returns the boolean Various Open source community functions are also available for use with GitOps ZTP. Additional resources RHACM support for hub cluster templates in configuration policies 10.11.1. Example hub templates The following code examples are valid hub templates. Each of these templates return values from the ConfigMap CR with the name test-config in the default namespace. Returns the value with the key common-key : {{hub fromConfigMap "default" "test-config" "common-key" hub}} Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}} Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}} Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}} 10.11.2. Specifying group and site configuration in group PolicyGenTemplate CRs with hub templates You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create a PolicyGenTemplate CR for each site. You can group the clusters in a fleet in various categories, depending on the use case, for example hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group PolicyGenTemplate CR to apply the changes to all the clusters in the group by using hub templates. The following example shows you how to use three ConfigMap CRs and one group PolicyGenTemplate CR to apply both site and group configuration to clusters grouped by hardware type and region. Note When you use the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. 
Procedure Create three ConfigMap CRs that contain the group and site configuration: Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: "[\"ens5f0\"]" hardware-type-1-sriov-node-policy-pfNames-2: "[\"ens7f0\"]" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: "2-31,34-63" hardware-type-1-cpu-reserved: "0-1,32-33" hardware-type-1-hugepages-default: "1G" hardware-type-1-hugepages-size: "1G" hardware-type-1-hugepages-count: "32" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\", \"name\":\"kafka-open\", \"url\":\"tcp://10.46.55.190:9092/test\"}]" zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\", \"infrastructure\"], \"labels\": {\"label1\": \"test1\", \"label2\": \"test2\", \"label3\": \"test3\", \"label4\": \"test4\"}, \"name\": \"all-to-default\", \"outputRefs\": [\"kafka-open\"]}]" Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: "140" du-sno-1-zone-1-sriov-network-vlan-2: "150" Note Each ConfigMap CR must be in the same namespace as the policy to be generated from the group PolicyGenTemplate CR. Commit the ConfigMap CRs in Git, and then push to the Git repository being monitored by the Argo CD application. Apply the hardware type and region labels to the clusters. The following command applies to a single cluster named du-sno-1-zone-1 and the labels chosen are "hardware-type": "hardware-type-1" and "group-du-sno-zone": "zone-1" : USD oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}' Create a group PolicyGenTemplate CR that uses hub templates to obtain the required data from the ConfigMap objects. 
This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs and Performance Profile for the clusters that match the labels listed under spec.bindingRules : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: "zone-1" hardware-type: "hardware-type-1" mcp: "master" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: "group-du-sno-cfg-policy" spec: outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: "group-du-sno-cfg-policy" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}' reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}' pages: - size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}' count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh Note To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template context value set to the name of the target managed cluster. To retrieve group-specific configuration, use the .ManagedClusterLabels field. 
This is a template context value set to the value of the managed cluster's labels. Commit the site PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". You can use the same PolicyGenTemplate CR for multiple clusters. If there is a configuration change, then the only modifications you need to make are to the ConfigMap objects that hold the configuration for each cluster and the labels of the managed clusters. 10.11.3. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command: USD oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time you update the ConfigMap . For example: USD oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example: USD oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: USD oc apply -f cgr-example.yaml
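When the referenced ConfigMap CRs change frequently, you can combine the steps from Option 2 into a small helper script that runs against the hub cluster. The following bash sketch is illustrative only: the manifest path, policy name, and namespace are hypothetical placeholders, and the policy name follows the <PolicyGenTemplate name>-<policyName> pattern used for the generated RHACM policies.
#!/usr/bin/env bash
# Sketch: apply an updated ConfigMap and trigger RHACM to reprocess the policy.
set -euo pipefail

CONFIGMAP_FILE="group-hardware-types-configmap.yaml"      # hypothetical manifest path
POLICY_NAME="group-du-sno-pgt-group-du-sno-cfg-policy"    # hypothetical generated policy name
POLICY_NAMESPACE="ztp-group"

# Apply the updated ConfigMap on the hub cluster.
oc apply -f "${CONFIGMAP_FILE}"

# Use a timestamp so the annotation value is different on every run, which
# causes RHACM to re-evaluate the policy with the new ConfigMap data.
oc annotate policy "${POLICY_NAME}" -n "${POLICY_NAMESPACE}" \
  policy.open-cluster-management.io/trigger-update="$(date +%s)" --overwrite
You still need to roll the updated configuration out to the managed clusters, for example with a ClusterGroupUpgrade CR as shown above.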
|
[
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15.1 extract /home/ztp --tar | tar x -C ./out",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false",
"--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"",
"example βββ policygentemplates βββ dev.yaml βββ kustomization.yaml βββ mec-edge-sno1.yaml βββ sno.yaml βββ source-crs 1 βββ PaoCatalogSource.yaml βββ PaoSubscription.yaml βββ custom-crs | βββ apiserver-config.yaml | βββ disable-nic-lldp.yaml βββ elasticsearch βββ ElasticsearchNS.yaml βββ ElasticsearchOperatorGroup.yaml",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f cgu-test.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies",
"spec: evaluationInterval: compliant: 30m noncompliant: 20s",
"spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s",
"oc get pods -n open-cluster-management-agent-addon",
"NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d",
"oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb",
"2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - \"cpufreq.default_governor=schedutil\" 1",
"oc get nodes",
"oc debug node/<node-name>",
"chroot /host",
"cat /proc/cmdline",
"- fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" spec: profile: - name: performance-patch data: | [...] [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1",
"- fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.15 policyName: subscription-policies",
"- fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies",
"- fileName: StorageLVMCluster.yaml policyName: \"lvms-config\" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10",
"- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043",
"- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs",
"#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: \"amqp://amq-router.amq-router.svc.cluster.local\"",
"- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"",
"oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"",
"AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"",
"oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"",
"variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota",
"butane storage.bu",
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}",
"[...] spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]",
"oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]",
"\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"",
"oc debug node/my-sno-node",
"chroot /host",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ββsda1 8:1 0 1M 0 part ββsda2 8:2 0 127M 0 part ββsda3 8:3 0 384M 0 part /boot ββsda4 8:4 0 243.6G 0 part /var β /sysroot/ostree/deploy/rhcos/var β /usr β /etc β / β /sysroot ββsda5 8:5 0 202.5G 0 part /var/lib/containers",
"df -h",
"Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000",
"sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"",
"cluster=<managed_cluster_name>",
"oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster",
"oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster",
"oc get image.config.openshift.io cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice",
"oc get pv image-registry-sc",
"oc get pods -n openshift-image-registry | grep registry*",
"cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d",
"oc debug node/sno-1.example.com",
"sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom",
"argocd.argoproj.io/sync-options: Replace=true",
"{{hub fromConfigMap \"default\" \"test-config\" \"common-key\" hub}}",
"{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) hub}}",
"{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) | toBool hub}}",
"{{hub (printf \"%s-name\" .ManagedClusterName) | fromConfigMap \"default\" \"test-config\" | toInt hub}}",
"apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: \"[\\\"ens5f0\\\"]\" hardware-type-1-sriov-node-policy-pfNames-2: \"[\\\"ens7f0\\\"]\" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: \"2-31,34-63\" hardware-type-1-cpu-reserved: \"0-1,32-33\" hardware-type-1-hugepages-default: \"1G\" hardware-type-1-hugepages-size: \"1G\" hardware-type-1-hugepages-count: \"32\"",
"apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: \"[{\\\"type\\\":\\\"kafka\\\", \\\"name\\\":\\\"kafka-open\\\", \\\"url\\\":\\\"tcp://10.46.55.190:9092/test\\\"}]\" zone-1-cluster-log-fwd-pipelines: \"[{\\\"inputRefs\\\":[\\\"audit\\\", \\\"infrastructure\\\"], \\\"labels\\\": {\\\"label1\\\": \\\"test1\\\", \\\"label2\\\": \\\"test2\\\", \\\"label3\\\": \\\"test3\\\", \\\"label4\\\": \\\"test4\\\"}, \\\"name\\\": \\\"all-to-default\\\", \\\"outputRefs\\\": [\\\"kafka-open\\\"]}]\"",
"apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: \"140\" du-sno-1-zone-1-sriov-network-vlan-2: \"150\"",
"oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{\"metadata\":{\"labels\":{\"hardware-type\": \"hardware-type-1\", \"group-du-sno-zone\": \"zone-1\"}}}'",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: \"zone-1\" hardware-type: \"hardware-type-1\" mcp: \"master\" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels \"hardware-type\")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh",
"oc delete policy <policy_name> -n <policy_namespace>",
"oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"",
"oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f cgr-example.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/edge_computing/ztp-advanced-policy-config
|
Chapter 16. Using Webhooks
|
Chapter 16. Using Webhooks A webhook is a way for a web page or web application to provide other applications with information in real time. Webhooks are only triggered after an event occurs. The request usually contains details of the event. An event triggers callbacks, such as sending an e-mail confirming that a host has been provisioned. Webhooks enable you to define a call to an external API based on a Satellite internal event using a fire-and-forget message exchange pattern. The application sending the request either does not wait for the response or ignores it. The payload of a webhook is created from webhook templates. Webhook templates use the same ERB syntax as Provisioning templates. Available variables: @event_name : Name of an event. @webhook_id : Unique event ID. @payload : Payload data, different for each event type. To access individual fields, use @payload[:key_name] Ruby hash syntax. @payload[:object] : Database object for events triggered by database actions (create, update, delete). Not available for custom events. @payload[:context] : Additional information as a hash, such as request and session UUID, remote IP address, user, organization, and location. Because webhooks use HTTP, no new infrastructure needs to be added to existing web services. The typical use case for webhooks in Satellite is making a call to a monitoring system when a host is created or deleted. Webhooks are useful where the action you want to perform in the external system can be achieved through its API. Where it is necessary to run additional commands or edit files, the shellhooks plugin for Capsules is available. The shellhooks plugin enables you to define a shell script on the Capsule that can be executed through the API. You can use webhooks successfully without installing the shellhooks plugin. For a list of available events, see Available webhook events . 16.1. Migrating to Webhooks The legacy foreman_hooks plugin provided full access to model objects that the webhooks plugin intentionally does not provide. The scope of what is available is limited by safemode, and all objects and macros are subject to an API stability promise and are fully documented. The number of events triggered by webhooks is substantially lower than with foreman_hooks . Webhooks are processed asynchronously, so there is minimal risk of tampering with internals of the system. It is not possible to migrate from foreman_hooks without creating payloads for each individual webhook script. However, the webhooks plugin comes with several example payload templates. You can also use the example payloads with shellhooks to simplify migration. Both script and payload templates must be customized to achieve similar results. 16.2. Installing Webhooks Use the following procedure to install webhooks. After installing webhooks, you can configure Satellite Server to send webhook requests. Procedure Install webhooks using the following command: Optional: You can install the CLI plugin using the following command: 16.3. Creating a Webhook Template Webhook templates are used to generate the body of the HTTP request to a configured target when a webhook is triggered. Use the following procedure to create a webhook template in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhook Templates . Click Clone an existing template or Create Template . Enter a name for the template. Use the editor to make changes to the template payload. A webhook HTTP payload must be created using Satellite template syntax.
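For example, a minimal JSON payload template built from the variables listed above might look like the following. This is an illustrative sketch only: which @payload keys are available depends on the event, and the hostname key shown here is present only for host-related custom events such as the build events listed below.
{
  "event": "<%= @event_name %>",
  "webhook_id": "<%= @webhook_id %>",
  "hostname": "<%= @payload[:hostname] %>"
}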
The webhook template can use a special variable called @object that can represent the main object of the event. @object can be missing in the case of certain events. You can determine what data are actually available with the @payload variable. For more information, see Template Writing Reference in Managing Hosts , and for available template macros and methods, visit /templates_doc on Satellite Server. Optional: Enter the description and audit comment. Assign organizations and locations. Click Submit . 16.4. Creating a Webhook You can customize events, payloads, HTTP authentication, content type, and headers through the Satellite web UI. Use the following procedure to create a webhook in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhooks . Click Create new . From the Subscribe to list, select an event. Enter a Name for your webhook. Enter a Target URL . Webhooks make HTTP requests to pre-configured URLs. The target URL can be a dynamic URL. Click Template to select a template. Webhook templates are used to generate the body of the HTTP request to Satellite Server when a webhook is triggered. Enter an HTTP method. Optional: If you do not want to activate the webhook when you create it, uncheck the Enabled flag. Click the Credentials tab. Optional: If HTTP authentication is required, enter User and Password . Optional: Uncheck Verify SSL if you do not want to verify the server certificate against the system certificate store or Satellite CA. On the Additional tab, enter the HTTP Content Type . For example, application/json , application/xml , or text/plain , depending on the payload you define. The application does not attempt to convert the content to match the specified content type. Optional: Provide HTTP headers as JSON. ERB is also allowed. When configuring webhooks with endpoints with non-standard HTTP or HTTPS ports, an SELinux port must be assigned; see Configuring SELinux to Ensure Access to Satellite on Custom Ports in Installing Satellite Server in a Connected Network Environment . 16.5. Available Webhook Events The following table contains a list of webhook events that are available from the Satellite web UI. Action events trigger webhooks only on success , so if an action fails, a webhook is not triggered. For more information about payload, go to Administer > About > Support > Templates DSL . A list of available types is provided in the following table. Some events are marked as custom ; in that case, the payload is not an object but a Ruby hash (key-value data structure), so the syntax is different. Event name Description Payload Actions Katello Content View Promote Succeeded A Content View was successfully promoted. Actions::Katello::ContentView::Promote Actions Katello Content View Publish Succeeded A Content View was successfully published. Actions::Katello::ContentView::Publish Actions Remote Execution Run Host Job Succeeded A generic remote execution job succeeded for a host. This event is emitted for all Remote Execution jobs when they complete. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Errata Install Succeeded Install errata using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Install Succeeded Install package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Install Succeeded Install package using the Katello interface.
Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Remove Remove package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Remove Succeeded Remove package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Service Restart Succeeded Restart Services using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Update Succeeded Update package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Update Succeeded Update package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Foreman OpenSCAP Run Scans Succeeded Run OpenSCAP scan. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Host Succeeded Runs an Ansible playbook containing all the roles defined for a host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Capsule Upgrade Succeeded Upgrade Capsules on given Capsule server hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Configure Cloud Connector Succeeded Configure Cloud Connector on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Insights Plan Succeeded Runs a given maintenance plan from Red Hat Access Insights given an ID. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Playbook Succeeded Run an Ansible playbook against given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Enable Web Console Succeeded Run an Ansible playbook to enable the web console on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Puppet Run Host Succeeded Perform a single Puppet run. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Module Stream Action Succeeded Perform a module stream action using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Pre-upgrade Succeeded Upgradeability check for RHEL 7 host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Remediation Plan Succeeded Run Remediation plan with Leapp. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Upgrade Succeeded Run Leapp upgrade job for RHEL 7 host. Actions::RemoteExecution::RunHostJob Build Entered A host entered the build mode. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Build Exited A host build mode was canceled, either it was successfully provisioned or the user canceled the build manually. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Content View Created/Updated/Destroyed Common database operations on a Content View. Katello::ContentView Domain Created/Updated/Destroyed Common database operations on a domain. Domain Host Created/Updated/Destroyed Common database operations on a host. Host Hostgroup Created/Updated/Destroyed Common database operations on a hostgroup. Hostgroup Model Created/Updated/Destroyed Common database operations on a model. Model Status Changed Global host status of a host changed. 
Custom event: @payload[:id] (host id), @payload[:hostname] , @payload[:global_status] (hash) Subnet Created/Updated/Destroyed Common database operations on a subnet. Subnet Template Render Performed A report template was rendered. Template User Created/Updated/Destroyed Common database operations on a user. User 16.6. Shellhooks With webhooks, you can only map one Satellite event to one API call. For advanced integrations, where a single shell script can contain multiple commands, you can install a Capsule shellhooks plugin that exposes executables using a REST HTTP API. You can then configure a webhook to reach out to a Capsule API to run a predefined shellhook. A shellhook is an executable script that can be written in any language as long as it can be executed. The shellhook can, for example, contain commands or edit files. You must place your executable scripts in /var/lib/foreman-proxy/shellhooks with only alphanumeric characters and underscores in their name. You can pass input to the shellhook script through the webhook payload. This input is redirected to the standard input of the shellhook script. You can pass arguments to the shellhook script using HTTP headers in the format X-Shellhook-Arg-1 to X-Shellhook-Arg-99 . For more information on passing arguments to the shellhook script, see: Section 16.8, "Passing Arguments to Shellhook Script Using Webhooks" Section 16.9, "Passing Arguments to Shellhook Script Using Curl" The HTTP method must be POST. An example URL would be: https://capsule.example.com:9090/shellhook/My_Script . Note Unlike the shellhooks directory, the URL must contain /shellhook/ in singular to be valid. You must enable Capsule Authorization for each webhook connected to a shellhook to enable it to authorize a call. Standard output and standard error output are redirected to the Capsule logs as messages with debug or warning levels, respectively. The shellhook HTTPS calls do not return a value. For an example of creating a shellhook script, see Section 16.10, "Creating a Shellhook to Print Arguments" . 16.7. Installing the Shellhooks Plugin Optionally, you can install and enable the shellhooks plugin on each Capsule used for shellhooks, using the following command: 16.8. Passing Arguments to Shellhook Script Using Webhooks Use this procedure to pass arguments to a shellhook script using webhooks. Procedure When creating a webhook, on the Additional tab, create HTTP headers in the following format: Ensure that the headers have a valid JSON or ERB format. Only pass safe fields like database ID, name, or labels that do not include new lines or quote characters. For more information, see Section 16.4, "Creating a Webhook" . Example 16.9. Passing Arguments to Shellhook Script Using Curl Use this procedure to pass arguments to a shellhook script using curl. Procedure When executing a shellhook script using curl , create HTTP headers in the following format: Example 16.10. Creating a Shellhook to Print Arguments Create a simple shellhook script that prints "Hello World!" when you run a remote execution job. Prerequisite You have the webhooks and shellhooks plug-ins installed. For more information, see: Section 16.2, "Installing Webhooks" Section 16.7, "Installing the Shellhooks Plugin" Procedure Modify the /var/lib/foreman-proxy/shellhooks/print_args script to print arguments to standard error output so you can see them in the Capsule logs: In the Satellite web UI, navigate to Administer > Webhooks . Click Create new .
From the Subscribe to list, select Actions Remote Execution Run Host Job Succeeded . Enter a Name for your webhook. In the Target URL field, enter the URL of your Capsule Server followed by :9090/shellhook/print_args : Note that shellhook in the URL is singular, unlike the shellhooks directory. From the Template list, select Empty Payload . On the Credentials tab, check Capsule Authorization . On the Additional tab, enter the following text in the Optional HTTP headers field: Click Submit . You have now successfully created a shellhook that prints "Hello World!" to the Capsule logs every time a remote execution job succeeds. Verification Run a remote execution job on any host. You can use time as a command. For more information, see Executing a Remote Job in Managing Hosts . Verify that the shellhook script was triggered and printed "Hello World!" to Capsule Server logs: You should find the following lines at the end of the log:
|
[
"satellite-installer --enable-foreman-plugin-webhooks",
"satellite-installer --enable-foreman-cli-webhooks",
"satellite-installer --enable-foreman-proxy-plugin-shellhooks",
"{ \"X-Shellhook-Arg-1\": \" VALUE \", \"X-Shellhook-Arg-2\": \" VALUE \" }",
"{ \"X-Shellhook-Arg-1\": \"<%= @object.content_view_version_id %>\", \"X-Shellhook-Arg-2\": \"<%= @object.content_view_name %>\" }",
"\"X-Shellhook-Arg-1: VALUE \" \"X-Shellhook-Arg-2: VALUE \"",
"curl -sX POST -H 'Content-Type: text/plain' -H \"X-Shellhook-Arg-1: Version 1.0\" -H \"X-Shellhook-Arg-2: My Content View\" --data \"\" https://capsule.example.com:9090/shellhook/My_Script",
"#!/bin/sh # Prints all arguments to stderr # echo \"USD@\" >&2",
"https:// capsule.example.com :9090/shellhook/print_args",
"{ \"X-Shellhook-Arg-1\": \"Hello\", \"X-Shellhook-Arg-2\": \"World!\" }",
"tail /var/log/foreman-proxy/proxy.log",
"[I] Started POST /shellhook/print_args [I] Finished POST /shellhook/print_args with 200 (0.33 ms) [I] [3520] Started task /var/lib/foreman-proxy/shellhooks/print_args\\ Hello\\ World\\! [W] [3520] Hello World!"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/Using_Webhooks_admin
|
Chapter 4. Testing Evacuation with Instance HA
|
Chapter 4. Testing Evacuation with Instance HA Warning The following procedure involves deliberately crashing a Compute node. Doing this forces the automated evacuation of instances through Instance HA. To test evacuation, boot one or more instances on the overcloud before crashing the Compute node that hosts them. Log in to the Compute node that hosts the instances, using the compute-n format. Crash the Compute node. Wait a few minutes and then verify that these instances respawned on other Compute nodes.
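To follow the evacuation of a specific instance, you can optionally poll its status and host fields from the director node. This sketch reuses the test-failover instance name from the commands below:
stack@director USD . overcloudrc
stack@director USD nova show test-failover | grep -E 'status|host'
While the evacuation is in progress, the instance may briefly show an error or rebuild status before returning to ACTIVE on a different Compute node.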
|
[
"stack@director USD . overcloudrc stack@director USD nova boot --image cirros --flavor 2 test-failover stack@director USD nova list --fields name,status,host",
"stack@director USD . stackrc stack@director USD ssh -l heat-admin compute-n heat-admin@ compute-n USD",
"heat-admin@ compute-n USD echo c > /proc/sysrq-trigger",
"stack@director USD nova list --fields name,status,host stack@director USD nova service-list"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_for_compute_instances/instanceha-testing
|
Chapter 5. Setting up Data Grid cluster transport
|
Chapter 5. Setting up Data Grid cluster transport Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer. 5.1. Default JGroups stacks Data Grid provides default JGroups stack files, default-jgroups-*.xml , in the default-configs directory inside the infinispan-core-14.0.21.Final-redhat-00001.jar file. File name Stack name Description default-jgroups-udp.xml udp Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets. default-jgroups-tcp.xml tcp Uses TCP for transport and the MPING protocol for discovery, which uses UDP multicast. Suitable for smaller clusters (under 100 nodes) only if you are using distributed caches because TCP is more efficient than UDP as a point-to-point protocol. default-jgroups-kubernetes.xml kubernetes Uses TCP for transport and DNS_PING for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available. default-jgroups-ec2.xml ec2 Uses TCP for transport and aws.S3_PING for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available. Requires additional dependencies. default-jgroups-google.xml google Uses TCP for transport and GOOGLE_PING2 for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available. Requires additional dependencies. default-jgroups-azure.xml azure Uses TCP for transport and AZURE_PING for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available. Requires additional dependencies. Additional resources JGroups Protocols 5.2. Cluster discovery protocols Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters. There are two types of discovery mechanisms that Data Grid can use: Generic discovery protocols that work on most networks and do not rely on external services. Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters. For instance the DNS_PING protocol performs discovery through DNS server records. Note Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose. Additional resources JGroups Discovery Protocols JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article) 5.2.1. PING PING, or UDPPING is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol. When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node and its own address. C=coordinator's address and A=own address. If no nodes respond to the PING request, the joining node becomes the coordinator node in a new cluster. PING configuration example <PING num_discovery_runs="3"/> Additional resources JGroups PING 5.2.2. TCPPING TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members. 
With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically. TCPPING configuration example <TCP bind_port="7800" /> <TCPPING timeout="3000" initial_hosts="USD{jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}" port_range="0" num_initial_members="3"/> Additional resources JGroups TCPPING 5.2.3. MPING MPING uses IP multicast to discover the initial membership of Data Grid clusters. You can use MPING to replace TCPPING discovery with TCP stacks and use multicasing for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks. MPING configuration example <MPING mcast_addr="USD{jgroups.mcast_addr:239.6.7.8}" mcast_port="USD{jgroups.mcast_port:46655}" num_discovery_runs="3" ip_ttl="USD{jgroups.udp.ip_ttl:2}"/> Additional resources JGroups MPING 5.2.4. TCPGOSSIP Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes. You inject the address ( IP:PORT ) of the Gossip router into Data Grid nodes as follows: Pass the address as a system property to the JVM; for example, -DGossipRouterAddress="10.10.2.4[12001]" . Reference that system property in the JGroups configuration file. Gossip router configuration example <TCP bind_port="7800" /> <TCPGOSSIP timeout="3000" initial_hosts="USD{GossipRouterAddress}" num_initial_members="3" /> Additional resources JGroups Gossip Router 5.2.5. JDBC_PING JDBC_PING uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection. Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database. JDBC_PING configuration example <JDBC_PING connection_url="jdbc:mysql://localhost:3306/database_name" connection_username="user" connection_password="password" connection_driver="com.mysql.jdbc.Driver"/> Important Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING. Additional resources JDBC_PING JDBC_PING Wiki 5.2.6. DNS_PING JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift. DNS_PING configuration example <dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" /> Additional resources JGroups DNS_PING DNS for Services and Pods (Kubernetes documentation for adding DNS entries) 5.2.7. Cloud discovery protocols Data Grid includes default JGroups stacks that use discovery protocol implementations that are specific to cloud providers. Discovery protocol Default stack file Artifact Version aws.S3_PING default-jgroups-ec2.xml org.jgroups.aws:jgroups-aws 2.0.1.Final GOOGLE_PING2 default-jgroups-google.xml org.jgroups.google:jgroups-google 1.0.0.Final azure.AZURE_PING default-jgroups-azure.xml org.jgroups.azure:jgroups-azure 2.0.0.Final Providing dependencies for cloud discovery protocols To use aws.S3_PING , GOOGLE_PING2 , or azure.AZURE_PING cloud discovery protocols, you need to provide dependent libraries to Data Grid. Procedure Add the artifact dependencies to your project pom.xml . You can then configure the cloud discovery protocol as part of a JGroups stack file or with system properties. Additional resources JGroups aws.S3_PING JGroups GOOGLE_PING2 JGroups azure.AZURE_PING 5.3. 
Using the default JGroups stacks Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels. Data Grid provides preconfigured JGroups stacks for UDP and TCP protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements. Procedure Do one of the following to use one of the default JGroups stacks: Use the stack attribute in your infinispan.xml file. <infinispan> <cache-container default-cache="replicatedCache"> <!-- Use the default UDP stack for cluster transport. --> <transport cluster="USD{infinispan.cluster.name}" stack="udp" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Use the addProperty() method to set the JGroups stack file: GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport() .defaultTransport() .clusterName("qa-cluster") //Uses the default-jgroups-udp.xml stack for cluster transport. .addProperty("configurationFile", "default-jgroups-udp.xml") .build(); Verification Data Grid logs the following message to indicate which stack it uses: Additional resources JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article) 5.4. Customizing JGroups stacks Adjust and tune properties to create a cluster transport configuration that works for your network requirements. Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties. Procedure Create a new JGroups stack declaration in your infinispan.xml file. Add the extends attribute and specify a JGroups stack to inherit properties from. Use the stack.combine attribute to modify properties for protocols configured in the inherited stack. Use the stack.position attribute to define the location for your custom stack. Specify the stack name as the value for the stack attribute in the transport configuration. For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack as follows: <infinispan> <jgroups> <!-- Creates a custom JGroups stack named "my-stack". --> <!-- Inherits properties from the default TCP stack. --> <stack name="my-stack" extends="tcp"> <!-- Uses TCPGOSSIP as the discovery mechanism instead of MPING --> <TCPGOSSIP initial_hosts="USD{jgroups.tunnel.gossip_router_hosts:localhost[12001]}" stack.combine="REPLACE" stack.position="MPING" /> <!-- Removes the FD_SOCK2 protocol from the stack. --> <FD_SOCK2 stack.combine="REMOVE"/> <!-- Modifies the timeout value for the VERIFY_SUSPECT2 protocol. --> <VERIFY_SUSPECT2 timeout="2000"/> <!-- Adds SYM_ENCRYPT to the stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT sym_algorithm="AES" keystore_name="mykeystore.p12" keystore_type="PKCS12" store_password="changeit" key_password="changeit" alias="myKey" stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2" /> </stack> </jgroups> <cache-container name="default" statistics="true"> <!-- Uses "my-stack" for cluster transport. --> <transport cluster="USD{infinispan.cluster.name}" stack="my-stack" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Check Data Grid logs to ensure it uses the stack. Reference JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article) 5.4.1. 
Inheritance attributes When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending. stack.position specifies protocols to modify. stack.combine uses the following values to extend JGroups stacks: Value Description COMBINE Overrides protocol properties. REPLACE Replaces protocols. INSERT_AFTER Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point. Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as NAKACK2 after the SYM_ENCRYPT or ASYM_ENCRYPT protocol so that NAKACK2 is secured. INSERT_BEFORE Inserts a protocols into the stack before another protocol. Affects the protocol that you specify as the insertion point. REMOVE Removes protocols from the stack. 5.5. Using JGroups system properties Pass system properties to Data Grid at startup to tune cluster transport. Procedure Use -D<property-name>=<property-value> arguments to set JGroups system properties as required. For example, set a custom bind port and IP address as follows: Note When you embed Data Grid clusters in clustered Red Hat JBoss EAP applications, JGroups system properties can clash or override each other. For example, you do not set a unique bind address for either your Data Grid cluster or your Red Hat JBoss EAP application. In this case both Data Grid and your Red Hat JBoss EAP application use the JGroups default property and attempt to form clusters using the same bind address. 5.5.1. Cluster transport properties Use the following properties to customize JGroups cluster transport. System Property Description Default Value Required/Optional jgroups.bind.address Bind address for cluster transport. SITE_LOCAL Optional jgroups.bind.port Bind port for the socket. 7800 Optional jgroups.mcast_addr IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. 239.6.7.8 Optional jgroups.mcast_port Port for the multicast socket. 46655 Optional jgroups.ip_ttl Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. 2 Optional jgroups.thread_pool.min_threads Minimum number of threads for the thread pool. 0 Optional jgroups.thread_pool.max_threads Maximum number of threads for the thread pool. 200 Optional jgroups.join_timeout Maximum number of milliseconds to wait for join requests to succeed. 2000 Optional jgroups.thread_dumps_threshold Number of times a thread pool needs to be full before a thread dump is logged. 10000 Optional jgroups.fd.port-offset Offset from jgroups.bind.port port for the FD (failure detection protocol) socket. 50000 (port 57800 ) Optional jgroups.frag_size Maximum number of bytes in a message. Messages larger than that are fragmented. 60000 Optional jgroups.diag.enabled Enables JGroups diagnostic probing. false Optional Additional resources JGroups system properties JGroups protocol list 5.5.2. System properties for cloud discovery protocols Use the following properties to configure JGroups discovery protocols for hosted platforms. 5.5.2.1. Amazon EC2 System properties for configuring aws.S3_PING . System Property Description Default Value Required/Optional jgroups.s3.region_name Name of the Amazon S3 region. No default value. Optional jgroups.s3.bucket_name Name of the Amazon S3 bucket. The name must exist and be unique. 
No default value. Optional 5.5.2.2. Google Cloud Platform System properties for configuring GOOGLE_PING2 . System Property Description Default Value Required/Optional jgroups.google.bucket_name Name of the Google Compute Engine bucket. The name must exist and be unique. No default value. Required 5.5.2.3. Azure System properties for azure.AZURE_PING`. System Property Description Default Value Required/Optional jboss.jgroups.azure_ping.storage_account_name Name of the Azure storage account. The name must exist and be unique. No default value. Required jboss.jgroups.azure_ping.storage_access_key Name of the Azure storage access key. No default value. Required jboss.jgroups.azure_ping.container Valid DNS name of the container that stores ping information. No default value. Required 5.5.2.4. OpenShift System properties for DNS_PING . System Property Description Default Value Required/Optional jgroups.dns.query Sets the DNS record that returns cluster members. No default value. Required jgroups.dns.record Sets the DNS record type. A Optional 5.6. Using inline JGroups stacks You can insert complete JGroups stack definitions into infinispan.xml files. Procedure Embed a custom JGroups stack declaration in your infinispan.xml file. <infinispan> <!-- Contains one or more JGroups stack definitions. --> <jgroups> <!-- Defines a custom JGroups stack named "prod". --> <stack name="prod"> <TCP bind_port="7800" port_range="30" recv_buf_size="20000000" send_buf_size="640000"/> <RED/> <MPING break_on_coord_rsp="true" mcast_addr="USD{jgroups.mping.mcast_addr:239.2.4.6}" mcast_port="USD{jgroups.mping.mcast_port:43366}" num_discovery_runs="3" ip_ttl="USD{jgroups.udp.ip_ttl:2}"/> <MERGE3 /> <FD_SOCK2 /> <FD_ALL3 timeout="3000" interval="1000" timeout_check_interval="1000" /> <VERIFY_SUSPECT2 timeout="1000" /> <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="200" xmit_table_num_rows="50" xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" /> <UNICAST3 conn_close_timeout="5000" xmit_interval="200" xmit_table_num_rows="50" xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" /> <pbcast.STABLE desired_avg_gossip="2000" max_bytes="1M" /> <pbcast.GMS print_local_addr="false" join_timeout="USD{jgroups.join_timeout:2000}" /> <UFC max_credits="4m" min_threshold="0.40" /> <MFC max_credits="4m" min_threshold="0.40" /> <FRAG4 /> </stack> </jgroups> <cache-container default-cache="replicatedCache"> <!-- Uses "prod" for cluster transport. --> <transport cluster="USD{infinispan.cluster.name}" stack="prod" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> 5.7. Using external JGroups stacks Reference external files that define custom JGroups stacks in infinispan.xml files. Procedure Put custom JGroups stack files on the application classpath. Alternatively you can specify an absolute path when you declare the external stack file. Reference the external stack file with the stack-file element. <infinispan> <jgroups> <!-- Creates a "prod-tcp" stack that references an external file. --> <stack-file name="prod-tcp" path="prod-jgroups-tcp.xml"/> </jgroups> <cache-container default-cache="replicatedCache"> <!-- Use the "prod-tcp" stack for cluster transport. --> <transport stack="prod-tcp" /> <replicated-cache name="replicatedCache"/> </cache-container> <!-- Cache configuration goes here. 
--> </infinispan> You can also use the addProperty() method in the TransportConfigurationBuilder class to specify a custom JGroups stack file as follows: GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport() .defaultTransport() .clusterName("prod-cluster") //Uses a custom JGroups stack for cluster transport. .addProperty("configurationFile", "my-jgroups-udp.xml") .build(); In this example, my-jgroups-udp.xml references a UDP stack with custom properties such as the following: Custom UDP stack example <config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd"> <UDP bind_addr="USD{jgroups.bind_addr:127.0.0.1}" mcast_addr="USD{jgroups.udp.mcast_addr:239.0.2.0}" mcast_port="USD{jgroups.udp.mcast_port:46655}" tos="8" ucast_recv_buf_size="20000000" ucast_send_buf_size="640000" mcast_recv_buf_size="25000000" mcast_send_buf_size="640000" bundler.max_size="64000" ip_ttl="USD{jgroups.udp.ip_ttl:2}" diag.enabled="false" thread_naming_pattern="pl" thread_pool.enabled="true" thread_pool.min_threads="2" thread_pool.max_threads="30" thread_pool.keep_alive_time="5000" /> <!-- Other JGroups stack configuration goes here. --> </config> Additional resources org.infinispan.configuration.global.TransportConfigurationBuilder 5.8. Using custom JChannels Construct custom JGroups JChannels as in the following example: GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); JChannel jchannel = new JChannel(); // Configure the jchannel as needed. JGroupsTransport transport = new JGroupsTransport(jchannel); global.transport().transport(transport); new DefaultCacheManager(global.build()); Note Data Grid cannot use custom JChannels that are already connected. Additional resources JGroups JChannel 5.9. Encrypting cluster transport Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join. 5.9.1. JGroups encryption protocols To secure cluster traffic, you can configure Data Grid nodes to encrypt JGroups message payloads with secret keys. Data Grid nodes can obtain secret keys from either: The coordinator node (asymmetric encryption). A shared keystore (symmetric encryption). Retrieving secret keys from coordinator nodes You configure asymmetric encryption by adding the ASYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys. Important When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks. Asymmetric encryption secures cluster traffic as follows: The first node in the Data Grid cluster, the coordinator node, generates a secret key. A joining node performs certificate authentication with the coordinator to mutually verify identity. The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node. The coordinator node encrypts the secret key with the public key and returns it to the joining node. The joining node decrypts and installs the secret key. The node joins the cluster, encrypting and decrypting messages with the secret key. 
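Before you configure certificate authentication for asymmetric encryption as described above, each node needs a keystore that establishes its identity. The following is a minimal sketch that uses the JDK keytool utility; the alias, distinguished name, keystore file, and passwords are placeholder values, and in a production cluster the certificates should form a chain that all nodes can verify rather than a standalone self-signed pair.

# Generate a key pair with a self-signed certificate for one node (placeholder values).
keytool -genkeypair -alias node1 -keyalg RSA -keysize 2048 -validity 365 -dname "CN=node1" -keystore mykeystore.jks -storepass changeit -keypass changeit

The keystore name and password here match the values used in the SSL_KEY_EXCHANGE example later in this chapter.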
Retrieving secret keys from shared keystores You configure symmetric encryption by adding the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide. Nodes install the secret key from a keystore on the Data Grid classpath at startup. Nodes join the cluster, encrypting and decrypting messages with the secret key. Comparison of asymmetric and symmetric encryption ASYM_ENCRYPT with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT . You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys. SYM_ENCRYPT , on the other hand, is faster than ASYM_ENCRYPT because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic. 5.9.2. Securing cluster transport with asymmetric encryption Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages. Procedure Create a keystore with certificate chains that enables Data Grid to verify node identity. Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the USDRHDG_HOME directory. Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example: <infinispan> <jgroups> <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. --> <stack name="encrypt-tcp" extends="tcp"> <!-- Adds a keystore that nodes use to perform certificate authentication. --> <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT2. --> <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks" keystore_password="changeit" stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2"/> <!-- Configures ASYM_ENCRYPT --> <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. --> <!-- The use_external_key_exchange = "true" attribute configures nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. --> <ASYM_ENCRYPT asym_keylength="2048" asym_algorithm="RSA" change_key_on_coord_leave = "false" change_key_on_leave = "false" use_external_key_exchange = "true" stack.combine="INSERT_BEFORE" stack.position="pbcast.NAKACK2"/> </stack> </jgroups> <cache-container name="default" statistics="true"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster="USD{infinispan.cluster.name}" stack="encrypt-tcp" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Verification When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack: Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT and can obtain the secret key from the coordinator node.
Otherwise the following message is written to Data Grid logs: Additional resources JGroups 4 Manual JGroups 4.2 Schema 5.9.3. Securing cluster transport with symmetric encryption Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide. Procedure Create a keystore that contains a secret key. Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the USDRHDG_HOME directory. Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. <infinispan> <jgroups> <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. --> <stack name="encrypt-tcp" extends="tcp"> <!-- Adds a keystore from which nodes obtain secret keys. --> <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT keystore_name="myKeystore.p12" keystore_type="PKCS12" store_password="changeit" key_password="changeit" alias="myKey" stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2"/> </stack> </jgroups> <cache-container name="default" statistics="true"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster="USD{infinispan.cluster.name}" stack="encrypt-tcp" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Verification When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack: Data Grid nodes can join the cluster only if they use SYM_ENCRYPT and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs: Additional resources JGroups 4 Manual JGroups 4.2 Schema 5.10. TCP and UDP ports for cluster traffic Data Grid uses the following ports for cluster transport messages: Default Port Protocol Description 7800 TCP/UDP JGroups cluster bind port 46655 UDP JGroups multicast Cross-site replication Data Grid uses the following ports for the JGroups RELAY2 protocol: 7900 For Data Grid clusters running on OpenShift. 7800 If using UDP for traffic between nodes and TCP for traffic between clusters. 7801 If using TCP for traffic between nodes and TCP for traffic between clusters.
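If a firewall runs on the cluster hosts, the ports listed above must be reachable between nodes. The following is a minimal sketch for hosts managed by firewalld, assuming the default port values; adjust the ports if you changed them, or use your own firewall tooling if firewalld is not in use.

# Open the JGroups cluster bind port (TCP) and the multicast port (UDP), then persist the change.
firewall-cmd --add-port=7800/tcp
firewall-cmd --add-port=46655/udp
firewall-cmd --runtime-to-permanent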
|
[
"<PING num_discovery_runs=\"3\"/>",
"<TCP bind_port=\"7800\" /> <TCPPING timeout=\"3000\" initial_hosts=\"USD{jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}\" port_range=\"0\" num_initial_members=\"3\"/>",
"<MPING mcast_addr=\"USD{jgroups.mcast_addr:239.6.7.8}\" mcast_port=\"USD{jgroups.mcast_port:46655}\" num_discovery_runs=\"3\" ip_ttl=\"USD{jgroups.udp.ip_ttl:2}\"/>",
"<TCP bind_port=\"7800\" /> <TCPGOSSIP timeout=\"3000\" initial_hosts=\"USD{GossipRouterAddress}\" num_initial_members=\"3\" />",
"<JDBC_PING connection_url=\"jdbc:mysql://localhost:3306/database_name\" connection_username=\"user\" connection_password=\"password\" connection_driver=\"com.mysql.jdbc.Driver\"/>",
"<dns.DNS_PING dns_query=\"myservice.myproject.svc.cluster.local\" />",
"<infinispan> <cache-container default-cache=\"replicatedCache\"> <!-- Use the default UDP stack for cluster transport. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"udp\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport() .defaultTransport() .clusterName(\"qa-cluster\") //Uses the default-jgroups-udp.xml stack for cluster transport. .addProperty(\"configurationFile\", \"default-jgroups-udp.xml\") .build();",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp",
"<infinispan> <jgroups> <!-- Creates a custom JGroups stack named \"my-stack\". --> <!-- Inherits properties from the default TCP stack. --> <stack name=\"my-stack\" extends=\"tcp\"> <!-- Uses TCPGOSSIP as the discovery mechanism instead of MPING --> <TCPGOSSIP initial_hosts=\"USD{jgroups.tunnel.gossip_router_hosts:localhost[12001]}\" stack.combine=\"REPLACE\" stack.position=\"MPING\" /> <!-- Removes the FD_SOCK2 protocol from the stack. --> <FD_SOCK2 stack.combine=\"REMOVE\"/> <!-- Modifies the timeout value for the VERIFY_SUSPECT2 protocol. --> <VERIFY_SUSPECT2 timeout=\"2000\"/> <!-- Adds SYM_ENCRYPT to the stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT sym_algorithm=\"AES\" keystore_name=\"mykeystore.p12\" keystore_type=\"PKCS12\" store_password=\"changeit\" key_password=\"changeit\" alias=\"myKey\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\" /> </stack> </jgroups> <cache-container name=\"default\" statistics=\"true\"> <!-- Uses \"my-stack\" for cluster transport. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"my-stack\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack",
"java -cp ... -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0",
"<infinispan> <!-- Contains one or more JGroups stack definitions. --> <jgroups> <!-- Defines a custom JGroups stack named \"prod\". --> <stack name=\"prod\"> <TCP bind_port=\"7800\" port_range=\"30\" recv_buf_size=\"20000000\" send_buf_size=\"640000\"/> <RED/> <MPING break_on_coord_rsp=\"true\" mcast_addr=\"USD{jgroups.mping.mcast_addr:239.2.4.6}\" mcast_port=\"USD{jgroups.mping.mcast_port:43366}\" num_discovery_runs=\"3\" ip_ttl=\"USD{jgroups.udp.ip_ttl:2}\"/> <MERGE3 /> <FD_SOCK2 /> <FD_ALL3 timeout=\"3000\" interval=\"1000\" timeout_check_interval=\"1000\" /> <VERIFY_SUSPECT2 timeout=\"1000\" /> <pbcast.NAKACK2 use_mcast_xmit=\"false\" xmit_interval=\"200\" xmit_table_num_rows=\"50\" xmit_table_msgs_per_row=\"1024\" xmit_table_max_compaction_time=\"30000\" /> <UNICAST3 conn_close_timeout=\"5000\" xmit_interval=\"200\" xmit_table_num_rows=\"50\" xmit_table_msgs_per_row=\"1024\" xmit_table_max_compaction_time=\"30000\" /> <pbcast.STABLE desired_avg_gossip=\"2000\" max_bytes=\"1M\" /> <pbcast.GMS print_local_addr=\"false\" join_timeout=\"USD{jgroups.join_timeout:2000}\" /> <UFC max_credits=\"4m\" min_threshold=\"0.40\" /> <MFC max_credits=\"4m\" min_threshold=\"0.40\" /> <FRAG4 /> </stack> </jgroups> <cache-container default-cache=\"replicatedCache\"> <!-- Uses \"prod\" for cluster transport. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"prod\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"<infinispan> <jgroups> <!-- Creates a \"prod-tcp\" stack that references an external file. --> <stack-file name=\"prod-tcp\" path=\"prod-jgroups-tcp.xml\"/> </jgroups> <cache-container default-cache=\"replicatedCache\"> <!-- Use the \"prod-tcp\" stack for cluster transport. --> <transport stack=\"prod-tcp\" /> <replicated-cache name=\"replicatedCache\"/> </cache-container> <!-- Cache configuration goes here. --> </infinispan>",
"GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport() .defaultTransport() .clusterName(\"prod-cluster\") //Uses a custom JGroups stack for cluster transport. .addProperty(\"configurationFile\", \"my-jgroups-udp.xml\") .build();",
"<config xmlns=\"urn:org:jgroups\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd\"> <UDP bind_addr=\"USD{jgroups.bind_addr:127.0.0.1}\" mcast_addr=\"USD{jgroups.udp.mcast_addr:239.0.2.0}\" mcast_port=\"USD{jgroups.udp.mcast_port:46655}\" tos=\"8\" ucast_recv_buf_size=\"20000000\" ucast_send_buf_size=\"640000\" mcast_recv_buf_size=\"25000000\" mcast_send_buf_size=\"640000\" bundler.max_size=\"64000\" ip_ttl=\"USD{jgroups.udp.ip_ttl:2}\" diag.enabled=\"false\" thread_naming_pattern=\"pl\" thread_pool.enabled=\"true\" thread_pool.min_threads=\"2\" thread_pool.max_threads=\"30\" thread_pool.keep_alive_time=\"5000\" /> <!-- Other JGroups stack configuration goes here. --> </config>",
"GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); JChannel jchannel = new JChannel(); // Configure the jchannel as needed. JGroupsTransport transport = new JGroupsTransport(jchannel); global.transport().transport(transport); new DefaultCacheManager(global.build());",
"<infinispan> <jgroups> <!-- Creates a secure JGroups stack named \"encrypt-tcp\" that extends the default TCP stack. --> <stack name=\"encrypt-tcp\" extends=\"tcp\"> <!-- Adds a keystore that nodes use to perform certificate authentication. --> <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT2. --> <SSL_KEY_EXCHANGE keystore_name=\"mykeystore.jks\" keystore_password=\"changeit\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <!-- Configures ASYM_ENCRYPT --> <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. --> <!-- The use_external_key_exchange = \"true\" attribute configures nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. --> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"default\" statistics=\"true\"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"encrypt-tcp\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>",
"[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it",
"<infinispan> <jgroups> <!-- Creates a secure JGroups stack named \"encrypt-tcp\" that extends the default TCP stack. --> <stack name=\"encrypt-tcp\" extends=\"tcp\"> <!-- Adds a keystore from which nodes obtain secret keys. --> <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT keystore_name=\"myKeystore.p12\" keystore_type=\"PKCS12\" store_password=\"changeit\" key_password=\"changeit\" alias=\"myKey\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> </stack> </jgroups> <cache-container name=\"default\" statistics=\"true\"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"encrypt-tcp\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>",
"[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/cluster-transport
|
Chapter 3. Enhancements
|
Chapter 3. Enhancements This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.14. 3.1. Support for higher disk capacities and disk quantities Previously, for local storage deployments, Red Hat recommended 9 devices or fewer per node and a disk size of 4 TiB or less. With this update, the recommended number of devices per node is now 12 or fewer, and the disk size is 16 TiB or less. Note Confirm the estimated recovery time using the OpenShift Data Foundation Recovery Calculator . It is recommended that the recovery time for host failure be under 2 hours. 3.2. Faster RWO recovery in case of node failures Previously, it took a long time for ReadWriteOnce (RWO) volumes to recover in case of node failures. With this update, the issue has been fixed. For the cluster to automatically address node failures and recover RWO volumes faster, manually add one of the following taints to the node (a sketch of the command is included at the end of this chapter): node.kubernetes.io/out-of-service=nodeshutdown:NoExecute node.kubernetes.io/out-of-service=nodeshutdown:NoSchedule 3.3. Automatic space reclaiming for RBD persistent volume claims (PVCs) Red Hat OpenShift Data Foundation version 4.14 introduces automatic space reclaiming for RBD persistent volume claims (PVCs) that are in namespaces that begin with openshift- . This means administrators no longer have to manually reclaim space for the RBD PVCs in namespaces that start with the openshift- prefix. 3.4. Automation of annotating encrypted RBD storage classes Annotation is automatically set when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. 3.5. LSO's LocalVolumeSet and LocalVolumeDiscovery CRs now support mpath device types With this release, disk and mpath device types are available for LocalVolumeSet and LocalVolumeDiscovery CRs. 3.6. Automatic detection of default StorageClass for OpenShift Virtualization workloads OpenShift Data Foundation deployments that use the OpenShift Virtualization platform now have a new StorageClass created automatically, and it can be set as the default storage class for OpenShift Virtualization. This new StorageClass is optimized for OpenShift Virtualization using a specific preset of the underlying storage. 3.7. Collect rbd status details for all images When troubleshooting certain RBD-related problems, the status of the RBD images is important information. With this release, for the OpenShift Data Foundation internal mode deployment, odf-must-gather includes the rbd status details, making it faster to troubleshoot RBD-related problems. 3.8. Change in default permission and FSGroupPolicy Permissions of newly created volumes now default to a more secure 755 instead of 777. FSGroupPolicy is now set to File (instead of ReadWriteOnceWithFSType in ODF 4.11) to allow application access to volumes based on FSGroup. This involves Kubernetes using fsGroup to change permissions and ownership of the volume to match the user-requested fsGroup in the pod's SecurityPolicy. Note Existing volumes with a huge number of files may take a long time to mount since changing permissions and ownership takes a lot of time. For more information, see this knowledgebase solution .
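For reference, the following is a sketch of applying the out-of-service taint described in the faster RWO recovery enhancement above; <node_name> is a placeholder for the failed node.

# Mark the failed node as out of service so that RWO volumes can recover.
oc adm taint nodes <node_name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

Once the node returns to service, remove the taint by appending - to the same key and effect, for example oc adm taint nodes <node_name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- .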
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/enhancements
|
Virtual Machine Management Guide
|
Virtual Machine Management Guide Red Hat Virtualization 4.4 Managing virtual machines in Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes the installation, configuration, and administration of virtual machines in Red Hat Virtualization.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/index
|
Chapter 7. Configuring Security
|
Chapter 7. Configuring Security 7.1. Securing Remote Connections 7.1.1. Using the Legacy Security Subsystem You can use the legacy security subsystem in JBoss EAP to secure the messaging-activemq subsystem. The legacy security subsystem uses legacy security realms and domains. See the JBoss EAP Security Architecture guide for more information on security realms and security domains . The messaging-activemq subsystem is pre-configured to use the security realm named ApplicationRealm and the security domain named other . Note The legacy security subsystem approach is the default configuration from JBoss EAP 7.0. The ApplicationRealm is defined near the top of the configuration file. <management> <security-realms> ... <security-realm name="ApplicationRealm"> <authentication> <local default-user="USDlocal" allowed-users="*" skip-group-loading="true"/> <properties path="application-users.properties" relative-to="jboss.server.config.dir" /> </authentication> <authorization> <properties path="application-roles.properties" relative-to="jboss.server.config.dir" /> </authorization> </security-realm> </security-realms> ... </management> As its name implies, ApplicationRealm is the default security realm for all application-focused subsystems in JBoss EAP such as the messaging-activemq , undertow , and ejb3 subsystems. ApplicationRealm uses the local filesystem to store usernames and hashed passwords. For convenience JBoss EAP includes a script that you can use to add users to the ApplicationRealm . See Default User Configuration in the JBoss EAP How To Configure Server Security guide for details. The other security domain is the default security domain for the application-related subsystems like messaging-activemq . It is not explicitly declared in the configuration; however, you can confirm which security domain is used by the messaging-activemq subsystem with the following management CLI command: You can also update which security domain is used: The JBoss EAP How To Configure Server Security guide has more information on how to create new security realms and domains. For now, it is worth noting how the other domain appears in the configuration: <subsystem xmlns="urn:jboss:domain:security:2.0"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmDirect" flag="required"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> ... <security-domains> </subsystem> The 'other' domain uses two login-modules as its means of authentication. The first module, Remoting , authenticates remote Jakarta Enterprise Beans invocations, while the RealmDirect module uses the information store defined in a given realm to authenticate users. In this case the default realm ApplicationRealm is used, since no realm is declared. Each module has its password-stacking option set to useFirstPass , which tells the login-module to store the principal name and password of the authenticated user. See the JBoss EAP Login Module Reference for more details on the login modules and their options. Role-based access is configured at the address level, see Role Based Security for Addresses . 7.1.2. Using the Elytron Subsystem You can also use the elytron subsystem to secure the messaging-activemq subsystem. 
You can find more information on using the elytron subsystem and creating Elytron security domains in the Elytron Subsystem section of the How to Configure Identity Management guide. To use an Elytron security domain: Undefine the legacy security domain. Set an Elytron security domain. 7.1.2.1. Setting an Elytron Security Domain Using the Management Console To set an Elytron security domain using the management console: Access the management console. For more information, see Management Console in the JBoss EAP Configuration Guide. Navigate to Configuration Subsystems Messaging (ActiveMQ) Server default and click View . Navigate to the Security tab and click Edit . Add or edit the value of Elytron Domain . Click Save to save the changes. Reload the server for the changes to take effect. Note You can only define either security-domain or elytron-domain , but you cannot have both defined at the same time. If neither is defined, JBoss EAP will use the security-domain default value of other , which maps to the other legacy security domain. 7.1.3. Securing the Transport The default http-connector that comes bundled with JBoss EAP messaging is not secured by default. You can secure the message transport and enable web traffic for SSL/TLS by following the instructions to configure one-way and two-way SSL/TLS for applications in How to Configure Server Security for JBoss EAP. Note The above approach to secure a message transport also works for securing the http-acceptor . When you configure the transport as described above, you must perform the following additional steps. By default, all HTTP acceptors are configured to use the default http-listener , which listens on the HTTP port. You must configure HTTP acceptors to use the https-listener , which listens on the HTTPS port. The socket-binding element for all HTTP connectors must be updated to use https instead of http . Each http-connector that communicates through SSL/TLS must set the ssl-enabled parameter to true . If an HTTP connector is used to connect to another server, you must configure the related parameters such as trust-store and key-store . Securing the http-connector requires that you configure the same parameters as you do with a remote-connector , which is documented in Securing a Remote Connector . See Configuring the Messaging Transports for information about configuring acceptors and connectors for messaging transports. 7.1.4. Securing a Remote Connector If you are not using the default http-connector and have instead created your own remote-connector and remote-acceptor for TCP communications, you can configure each for SSL/TLS by using the properties in the table below. The properties appear in the configuration as part of the child <param> elements of the acceptor or connector. Typically, a server owns its private SSL/TLS key and shares its public key with clients. In this scenario, the server defines the key-store-path and key-store-password parameters in a remote-acceptor . Since each client can have its truststore located at a different location, and be encrypted by a different password, specifying the trust-store-path and trust-store-password properties on the remote-connector is not recommended. Instead, configure these parameters on the client side using the system properties javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword . The parameters you need to configure for a remote-connector are ssl-enabled=true and useDefaultSslContext=true .
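For example, the following is a minimal sketch of starting a standalone messaging client with the standard system properties mentioned above; the truststore path, password, JAR, and main class are placeholders rather than part of the JBoss EAP configuration.

# Launch the client JVM with the client-side truststore settings (placeholder values).
java -Djavax.net.ssl.trustStore=/path/to/client.truststore -Djavax.net.ssl.trustStorePassword=clientTrustSecret -cp my-jms-client.jar org.example.JmsClient

If another component in the client JVM already uses the standard properties, use the org.apache.activemq.ssl.trustStore and org.apache.activemq.ssl.trustStorePassword properties instead, as described in the table below.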
However, if the server uses remote-connector to connect to another server, it makes sense in this case to set the trust-store-path and trust-store-password parameters of the remote-connector . In the above use case, the remote-acceptor would be created using the following management CLI command: To create the remote-connector from the above use case, use the following management CLI command: The management CLI also allows you to add a parameter to an already existing remote-acceptor or remote-connector as well: Note that the remote-acceptor and remote-connector both reference a socket-binding to declare the port to be used for communication. See the Overview of the Messaging Subsystem Configuration for more information on socket bindings and their relationship to acceptors and connectors. Table 7.1. SSL/TLS-related Configuration Properties for the NettyConnectorFactory Property Description enabled-cipher-suites Can be used to configure an acceptor or connector. This is a comma separated list of cipher suites used for SSL/TLS communication. The default value is null which means the JVM's default will be used. enabled-protocols Can be used to configure an acceptor or connector. This is a comma separated list of protocols used for SSL/TLS communication. The default value is null which means the JVM's default will be used. key-store-password When used on an acceptor, this is the password for the server-side keystore. When used on a connector, this is the password for the client-side keystore. This is only relevant for a connector if you are using two-way SSL/TLS. Although this value can be configured on the server, it is downloaded and used by the client. If the client needs to use a different password from that set on the server, it can override the server-side setting by either using the standard javax.net.ssl.keyStorePassword system property. Use the org.apache.activemq.ssl.keyStorePassword property if another component on the client is already making use of the standard system property. key-store-path When used on an acceptor, this is the path to the SSL/TLS keystore on the server which holds the server's certificates. Use for certificates either self-signed or signed by an authority. When used on a connector, this is the path to the client-side SSL/TLS keystore which holds the client certificates. This is only relevant for a connector if you are using two-way SSL/TLS. Although this value is configured on the server, it is downloaded and used by the client. If the client needs to use a different path from that set on the server, it can override the server-side setting by using the standard javax.net.ssl.keyStore system property. Use the org.apache.activemq.ssl.keyStore system property if another component on the client is already making use of the standard property. key-store-provider Defines the format of the file in which keys are stored, PKCS11 or PKCS12 for example. The accepted values are JDK specific. needs-client-auth This property is only for an acceptor. It tells a client connecting to this acceptor that two-way SSL/TLS is required. Valid values are true or false . Default is false . ssl-enabled Must be true to enable SSL/TLS. Default is false . trust-store-password When used on an acceptor, this is the password for the server-side truststore. This is only relevant for an acceptor if you are using two-way SSL/TLS. When used on a connector, this is the password for the client-side truststore. 
Although this value can be configured on the server, it is downloaded and used by the client. If the client needs to use a different password from that set on the server, it can override the server-side setting by using either the standard javax.net.ssl.trustStorePassword system property. Use the org.apache.activemq.ssl.trustStorePassword system property if another component on the client is already making use of the standard property. trust-store-path When used on an acceptor, this is the path to the server-side SSL/TLS keystore that holds the keys of all the clients that the server trusts. This is only relevant for an acceptor if you are using two-way SSL/TLS. When used on a connector, this is the path to the client-side SSL/TLS keystore which holds the public keys of all the servers that the client trusts. Although this value can be configured on the server, it is downloaded and used by the client. If the client needs to use a different path from that set on the server, it can override the server-side setting by using either the standard javax.net.ssl.trustStore system property. Use the org.apache.activemq.ssl.trustStore system property if another component on the client is already making use of the standard system property. trust-store-provider Defines the format of the file in which keys are stored, PKCS11 or PKCS12 for example. The accepted values are JDK specific. 7.2. Securing Destinations In addition to securing remote connections into the messaging server, you can also configure security around specific destinations. This is done by adding a security constraint using the security-setting configuration element. JBoss EAP messaging comes with a security-setting configured by default, as shown in the output from the following management CLI command: The security-setting option makes use of wildcards in the name field to handle which destinations to apply the security constraint. The value of a single # will match any address. For more information on using wildcards in security constraints, see Role Based Security for Addresses . 7.2.1. Role-Based Security for Addresses JBoss EAP messaging contains a flexible role-based security model for applying security to queues, based on their addresses. The core JBoss EAP messaging server consists mainly of sets of queues bound to addresses. When a message is sent to an address, the server first looks up the set of queues that are bound to that address and then routes the message to the bound queues. JBoss EAP messaging has a set of permissions that can be applied against queues based on their address. An exact string match on the address can be used or a wildcard match can be used using the wildcard characters # and * . See Address Settings for more information on how to use the wildcard syntax. You can create multiple roles for each security-setting , and there are 7 permission settings that can be applied to a role. Below is the complete list of the permissions available: create-durable-queue allows the role to create a durable queue under matching addresses. delete-durable-queue allows the role to delete a durable queue under matching addresses. create-non-durable-queue allows the role to create a non-durable queue under matching addresses. delete-non-durable-queue allows the role to delete a non-durable queue under matching addresses. send allows the role to send a message to matching addresses. consume allows the role to consume a message from a queue bound to matching addresses. 
manage allows the role to invoke management operations by sending management messages to the management address. Configuring Role-Based Security To start using role-based security for a security-setting , you first must create one. As an example, a security-setting of news.europe.# is created below. It would apply to any destination starting with news.europe. , such as news.europe.fr or news.europe.tech.uk . Next, you add a role to the security-setting you created and declare permissions for it. In the example below, the dev role is created and given permissions to consume from, and send to, queues, as well as to create and delete non-durable queues. Because the default is false , you have to tell JBoss EAP only about the permissions you want to switch on. To further illustrate the use of permissions, the example below creates an admin role and allows it to send management messages by switching on the manage permission. The permissions for creating and deleting durable queues are switched on as well: To confirm the configuration of a security-setting , use the management CLI. Remember to use the recursive=true option to get the full display of permissions: Above, the permissions for addresses that start with the string news.europe. are displayed in full by the management CLI. To summarize, only users who have the admin role can create or delete durable queues, while only users with the dev role can create or delete non-durable queues. Furthermore, users with the dev role can send or consume messages, but admin users cannot. They can, however, send management messages since their manage permission is set to true . In cases where more than one match applies to a set of addresses, the more specific match takes precedence. For example, the address news.europe.tech.uk.# is more specific than news.europe.tech.# . Because permissions are not inherited, you can effectively deny permissions in more specific security-setting blocks by simply not specifying them. Otherwise it would not be possible to deny permissions in sub-groups of addresses. The mapping between a user and what roles they have is handled by the security manager. JBoss EAP ships with a user manager that reads user credentials from a file on disk, and can also plug into JAAS or JBoss EAP security. For more information on configuring the security manager, see the JBoss EAP Security Architecture guide. 7.2.1.1. Granting Unauthenticated Clients the guest Role Using the Legacy Security Subsystem If you want JBoss EAP to automatically grant unauthenticated clients the guest role, make the following two changes: Add a new module-option to the other security domain. The new option, unauthenticatedIdentity , will tell JBoss EAP to grant guest access to unauthenticated clients. The recommended way to do this is by using the management CLI: Note that the server requires a reload after issuing the command. You can confirm the new option by using the following management CLI command: Also, your server configuration file should look something like this after the command executes: <subsystem xmlns="urn:jboss:domain:security:2.0"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> ... <login-module code="RealmDirect" flag="required"> ... <module-option name="unauthenticatedIdentity" value="guest"/> ... </login-module> ... </authentication> </security-domain> ... </security-domains> </subsystem> Uncomment the following line in the file application-roles.properties by deleting the # character.
The file is located in EAP_HOME /standalone/configuration/ or EAP_HOME /domain/configuration/ , depending on whether you are using standalone servers or a domain controller respectively. Remote clients should now be able to access the server without needing to authenticate. They will be given the permissions associated with the guest role. 7.3. Controlling Jakarta Messaging ObjectMessage Deserialization Because an ObjectMessage can contain potentially dangerous objects, ActiveMQ Artemis provides a simple class filtering mechanism to control which packages and classes are to be trusted and which are not. You can add objects whose classes are from trusted packages to a white list to indicate they can be deserialized without a problem. You can add objects whose classes are from untrusted packages to a black list to prevent them from being deserialized. ActiveMQ Artemis filters objects for deserialization as follows. If both the white list and the black list are empty, which is the default, any serializable object is allowed to be deserialized. If an object's class or package matches one of the entries in the black list, it is not allowed to be deserialized. If an object's class or package matches an entry in the white list, it is allowed to be deserialized. If an object's class or package matches an entry in both the black list and the white list, the one in black list takes precedence, meaning it is not allowed to be deserialized. If an object's class or package matches neither the black list nor the white list, the object deserialization is denied, unless the white list is empty, meaning there is no white list specified. An object is considered a match if its full name exactly matches one of the entries in the list, if its package matches one of the entries in the list, or if it is a subpackage of one of the entries in the list. You can specify which objects can be deserialized on a connection-factory and on a pooled-connection-factory using the deserialization-white-list and deserialization-black-list attributes. The deserialization-white-list attribute is used to define the list of classes or packages that are allowed to be deserialized. The deserialization-black-list attribute is used to define the list of classes or packages that are not allowed to be deserialized. The following commands create a black list for the RemoteConnectionFactory connection factory and a white list for the activemq-ra pooled connection factory for the default server. /subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=deserialization-black-list,value=[my.untrusted.package,another.untrusted.package]) /subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:write-attribute(name=deserialization-white-list,value=[my.trusted.package]) These commands generate the following configuration in the messaging-activemq subsystem. <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1" deserialization-black-list="my.untrusted.package another.untrusted.package"/> <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" deserialization-white-list="my.trusted.package" transaction="xa"/> For information about connection factories and pooled connection factories, see Configuring Connection Factories in this guide. 
You can also specify which objects can be deserialized in an MDB by configuring the activation properties. The deserializationWhiteList property is used to define the list of classes or packages that are allowed to be deserialized. The deserializationBlackList property is used to define the list of classes or packages that are not allowed to be deserialized. For more information about activation properties, see Configuring MDBs Using a Deployment Descriptor in Developing Jakarta Enterprise Beans Applications for JBoss EAP. 7.4. Authorization Invalidation Management The security-invalidation-interval attribute on the server in the messaging-activemq subsystem determines how long an authorization is cached before an action must be re-authorized. When the system authorizes a user to perform an action at an address, the authorization is cached. The next time the same user performs the same action at the same address, the system uses the cached authorization for the action. For example, the user admin attempts to send a message to the address news . The system authorizes the action, and caches the authorization. The next time admin attempts to send a message to news , the system uses the cached authorization. If the cached authorization is not used again within the time specified by the invalidation interval, the authorization is cleared from the cache. The system must re-authorize the user to perform the requested action at the requested address. After installation, JBoss EAP assumes a default value of 10000 milliseconds (10 seconds). The security-invalidation-interval attribute is configurable. For example, the following command updates the interval to 60000 milliseconds (60 seconds or one minute). You must reload the server for the modification of the configuration to take effect. Reading the attribute shows the new result.
|
[
"<management> <security-realms> <security-realm name=\"ApplicationRealm\"> <authentication> <local default-user=\"USDlocal\" allowed-users=\"*\" skip-group-loading=\"true\"/> <properties path=\"application-users.properties\" relative-to=\"jboss.server.config.dir\" /> </authentication> <authorization> <properties path=\"application-roles.properties\" relative-to=\"jboss.server.config.dir\" /> </authorization> </security-realm> </security-realms> </management>",
"/subsystem=messaging-activemq/server=default:read-attribute(name=security-domain) { \"outcome\" => \"success\", \"result\" => \"other\" }",
"/subsystem=messaging-activemq/server=default:write-attribute(name=security-domain, value=mySecurityDomain)",
"<subsystem xmlns=\"urn:jboss:domain:security:2.0\"> <security-domains> <security-domain name=\"other\" cache-type=\"default\"> <authentication> <login-module code=\"Remoting\" flag=\"optional\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> </login-module> <login-module code=\"RealmDirect\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> </login-module> </authentication> </security-domain> <security-domains> </subsystem>",
"/subsystem=messaging-activemq/server=default:undefine-attribute(name=security-domain)",
"/subsystem=messaging-activemq/server=default:write-attribute(name=elytron-domain, value=myElytronSecurityDomain) reload",
"/subsystem=messaging-activemq/server=default/remote-acceptor=mySslAcceptor:add(socket-binding=netty,params={ssl-enabled=true, key-store-path= PATH/TO /server.jks, key-store-password=USD{VAULT::server-key::key-store-password::sharedKey}})",
"/subsystem=messaging-activemq/server=default/remote-connector=mySslConnector:add(socket-binding=netty,params={ssl-enabled=true, useDefaultSslContext=true})",
"/subsystem=messaging-activemq/server=default/remote-connector=myOtherSslConnector:map-put(name=params,key=ssl-enabled,value=true)",
"/subsystem=messaging-activemq/server=default:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { . \"security-setting\" => {\"#\" => {\"role\" => {\"guest\" => { \"consume\" => true, \"create-durable-queue\" => false, \"create-non-durable-queue\" => true, \"delete-durable-queue\" => false, \"delete-non-durable-queue\" => true, \"manage\" => false, \"send\" => true }}}} } }",
"/subsystem=messaging-activemq/server=default/security-setting=news.europe.#:add() {\"outcome\" => \"success\"}",
"/subsystem=messaging-activemq/server=default/security-setting=news.europe.#/role=dev:add(consume=true,delete-non-durable-queue=true,create-non-durable-queue=true,send=true) {\"outcome\" => \"success\"}",
"/subsystem=messaging-activemq/server=default/security-setting=news.europe.#/role=admin:add(manage=true,create-durable-queue=true,delete-durable-queue=true) {\"outcome\" => \"success\"}",
"/subsystem=messaging-activemq/server=default:read-children-resources(child-type=security-setting,recursive=true) { \"outcome\" => \"success\", \"result\" => { \"#\" => {\"role\" => {\"guest\" => { \"consume\" => true, \"create-durable-queue\" => false, \"create-non-durable-queue\" => true, \"delete-durable-queue\" => false, \"delete-non-durable-queue\" => true, \"manage\" => false, \"send\" => true }}}, \"news.europe.#\" => {\"role\" => { \"dev\" => { \"consume\" => true, \"create-durable-queue\" => false, \"create-non-durable-queue\" => true, \"delete-durable-queue\" => false, \"delete-non-durable-queue\" => true, \"manage\" => false, \"send\" => true }, \"admin\" => { \"consume\" => false, \"create-durable-queue\" => true, \"create-non-durable-queue\" => false, \"delete-durable-queue\" => true, \"delete-non-durable-queue\" => false, \"manage\" => true, \"send\" => false } }} }",
"/subsystem=security/security-domain=other/authentication=classic/login-module=RealmDirect:map-put(name=module-options,key=unauthenticatedIdentity,value=guest) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/subsystem=security/security-domain=other/authentication=classic/login-module=RealmDirect:read-resource() { \"outcome\" => \"success\", \"result\" => { \"code\" => \"RealmDirect\", \"flag\" => \"required\", \"module\" => undefined, \"module-options\" => { \"password-stacking\" => \"useFirstPass\", \"unauthenticatedIdentity\" => \"guest\" } } }",
"<subsystem xmlns=\"urn:jboss:domain:security:2.0\"> <security-domains> <security-domain name=\"other\" cache-type=\"default\"> <authentication> <login-module code=\"RealmDirect\" flag=\"required\"> <module-option name=\"unauthenticatedIdentity\" value=\"guest\"/> </login-module> </authentication> </security-domain> </security-domains> </subsystem>",
"#guest=guest",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=deserialization-black-list,value=[my.untrusted.package,another.untrusted.package]) /subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:write-attribute(name=deserialization-white-list,value=[my.trusted.package])",
"<connection-factory name=\"RemoteConnectionFactory\" entries=\"java:jboss/exported/jms/RemoteConnectionFactory\" connectors=\"http-connector\" ha=\"true\" block-on-acknowledge=\"true\" reconnect-attempts=\"-1\" deserialization-black-list=\"my.untrusted.package another.untrusted.package\"/> <pooled-connection-factory name=\"activemq-ra\" entries=\"java:/JmsXA java:jboss/DefaultJMSConnectionFactory\" connectors=\"in-vm\" deserialization-white-list=\"my.trusted.package\" transaction=\"xa\"/>",
"/subsystem=messaging-activemq/server=default:read-attribute(name=security-invalidation-interval) { \"outcome\" => \"success\", \"result\" => 10000L }",
"/subsystem=messaging-activemq/server=default:write-attribute(name=security-invalidation-interval,value=60000) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/subsystem=messaging-activemq/server=default:read-attribute(name=security-invalidation-interval) { \"outcome\" => \"success\", \"result\" => 60000L }"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/configuring_messaging_security
|
Chapter 53. Using external identity providers to authenticate to IdM
|
Chapter 53. Using external identity providers to authenticate to IdM You can associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 8.7 or later, they receive RHEL Identity Management (IdM) single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands. Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command. 53.1. The benefits of connecting IdM to an external IdP As an administrator, you might want to allow users stored in an external identity source, such as a cloud services provider, to access RHEL systems joined to your Identity Management (IdM) environment. To achieve this, you can delegate the authentication and authorization process of issuing Kerberos tickets for these users to that external entity. You can use this feature to expand IdM's capabilities and allow users stored in external identity providers (IdPs) to access Linux systems managed by IdM. 53.2. How IdM incorporates logins via external IdPs SSSD 2.7.0 contains the sssd-idp package, which implements the idp Kerberos pre-authentication method. This authentication method follows the OAuth 2.0 Device Authorization Grant flow to delegate authorization decisions to external IdPs: An IdM client user initiates OAuth 2.0 Device Authorization Grant flow, for example, by attempting to retrieve a Kerberos TGT with the kinit utility at the command line. A special code and website link are sent from the Authorization Server to the IdM KDC backend. The IdM client displays the link and the code to the user. In this example, the IdM client outputs the link and code on the command line. The user opens the website link in a browser, which can be on another host, a mobile phone, and so on: The user enters the special code. If necessary, the user logs in to the OAuth 2.0-based IdP. The user is prompted to authorize the client to access information. The user confirms access at the original device prompt. In this example, the user hits the Enter key at the command line. The IdM KDC backend polls the OAuth 2.0 Authorization Server for access to user information. What is supported: Logging in remotely via SSH with the keyboard-interactive authentication method enabled, which allows calling Pluggable Authentication Module (PAM) libraries. Logging in locally with the console via the logind service. Retrieving a Kerberos ticket-granting ticket (TGT) with the kinit utility. What is currently not supported: Logging in to the IdM WebUI directly. To log in to the IdM WebUI, you must first acquire a Kerberos ticket. Logging in to Cockpit WebUI directly. To log in to the Cockpit WebUI, you must first acquire a Kerberos ticket. Additional resources Authentication against external Identity Providers RFC 8628: OAuth 2.0 Device Authorization Grant 53.3. Creating a reference to an external identity provider To connect external identity providers (IdPs) to your Identity Management (IdM) environment, create IdP references in IdM. Complete this procedure to create a reference called my-keycloak-idp to an IdP based on the Keycloak template. For more reference templates, see Example references to different external IdPs in IdM . Prerequisites You have registered IdM as an OAuth application to your external IdP, and obtained a client ID. 
You can authenticate as the IdM admin account. Your IdM servers are using RHEL 8.7 or later. Your IdM servers are using SSSD 2.7.0 or later. Procedure Authenticate as the IdM admin on an IdM server. Create a reference called my-keycloak-idp to an IdP based on the Keycloak template, where the --base-url option specifies the URL to the Keycloak server in the format server-name.USDDOMAIN:USDPORT/prefix . Verification Verify that the output of the ipa idp-show command shows the IdP reference you have created. Additional resources Example references to different external IdPs in IdM Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands ipa help idp-add 53.4. Example references to different external IdPs in IdM The following table lists examples of the ipa idp-add command for creating references to different IdPs in IdM. Identity Provider Important options Command example Microsoft Identity Platform, Azure AD --provider microsoft --organization Google --provider google GitHub --provider github Keycloak, Red Hat Single Sign-On --provider keycloak --organization --base-url Note The Quarkus version of Keycloak 17 and later have removed the /auth/ portion of the URI. If you use the non-Quarkus distribution of Keycloak in your deployment, include /auth/ in the --base-url option. Okta --provider okta Additional resources Creating a reference to an external identity provider Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands 53.5. Options for the ipa idp-* commands to manage external identity providers in IdM The following examples show how to configure references to external IdPs based on the different IdP templates. Use the following options to specify your settings: --provider the predefined template for one of the known identity providers --client-id the OAuth 2.0 client identifier issued by the IdP during application registration. As the application registration procedure is specific to each IdP, refer to their documentation for details. If the external IdP is Red Hat Single Sign-On (SSO), see Creating an OpenID Connect Client . --base-url base URL for IdP templates, required by Keycloak and Okta --organization Domain or Organization ID from the IdP, required by Microsoft Azure --secret (optional) Use this option if you have configured your external IdP to require a secret from confidential OAuth 2.0 clients. If you use this option when creating an IdP reference, you are prompted for the secret interactively. Protect the client secret as a password. Note SSSD in RHEL 8.7 only supports non-confidential OAuth 2.0 clients that do not use a client secret. If you want to use external IdPs that require a client secret from confidential clients, you must use SSSD in RHEL 8.8 and later. Additional resources Creating a reference to an external identity provider Example references to different external IdPs in IdM The --provider option in the ipa idp-* commands 53.6. Managing references to external IdPs After you have created a reference to an external identity provider (IdP), you can find, show, modify, and delete that reference. This example shows you how to manage a reference to an external IdP named keycloak-server1 . Prerequisites You can authenticate as the IdM admin account. Your IdM servers are using RHEL 8.7 or later. Your IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. 
See Creating a reference to an external identity provider . Procedure Authenticate as the IdM admin on an IdM server. Manage the IdP reference. To find an IdP reference whose entry includes the string keycloak : To display an IdP reference named my-keycloak-idp : To modify an IdP reference, use the ipa idp-mod command. For example, to change the secret for an IdP reference named my-keycloak-idp , specify the --secret option to be prompted for the secret: To delete an IdP reference named my-keycloak-idp : 53.7. Enabling an IdM user to authenticate via an external IdP To enable an IdM user to authenticate via an external identity provider (IdP), associate the external IdP reference you have previously created with the user account. This example associates the external IdP reference keycloak-server1 with the user idm-user-with-external-idp . Prerequisites Your IdM client and IdM servers are using RHEL 8.7 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . Procedure Modify the IdM user entry to associate an IdP reference with the user account: Verification Verify that the output of the ipa user-show command for that user displays references to the IdP: 53.8. Retrieving an IdM ticket-granting ticket as an external IdP user If you have delegated authentication for an Identity Management (IdM) user to an external identity provider (IdP), the IdM user can request a Kerberos ticket-granting ticket (TGT) by authenticating to the external IdP. Complete this procedure to: Retrieve and store an anonymous Kerberos ticket locally. Request the TGT for the idm-user-with-external-idp user by using kinit with the -T option to enable Flexible Authentication via Secure Tunneling (FAST) channel to provide a secure connection between the Kerberos client and Kerberos Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 8.7 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . The user that you are initially logged in as has write permissions on a directory in the local filesystem. Procedure Use Anonymous PKINIT to obtain a Kerberos ticket and store it in a file named ./fast.ccache . Optional: View the retrieved ticket: Begin authenticating as the IdM user, using the -T option to enable the FAST communication channel. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. The pa_type = 152 indicates external IdP authentication. 53.9. Logging in to an IdM client via SSH as an external IdP user To log in to an IdM client via SSH as an external identity provider (IdP) user, begin the login process on the command linel. When prompted, perform the authentication process at the website associated with the IdP, and finish the process at the Identity Management (IdM) client. Prerequisites Your IdM client and IdM servers are using RHEL 8.7 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. 
You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . Procedure Attempt to log in to the IdM client via SSH. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. 53.10. The --provider option in the ipa idp-* commands The following identity providers (IdPs) support OAuth 2.0 device authorization grant flow: Microsoft Identity Platform, including Azure AD Google GitHub Keycloak, including Red Hat Single Sign-On (SSO) Okta When using the ipa idp-add command to create a reference to one of these external IdPs, you can specify the IdP type with the --provider option, which expands into additional options as described below: --provider=microsoft Microsoft Azure IdPs allow parametrization based on the Azure tenant ID, which you can specify with the --organization option to the ipa idp-add command. If you need support for the live.com IdP, specify the option --organization common . Choosing --provider=microsoft expands to use the following options. The value of the --organization option replaces the string USD{ipaidporg} in the table. Option Value --auth-uri=URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/authorize --dev-auth-uri=URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/devicecode --token-uri=URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/token --userinfo-uri=URI https://graph.microsoft.com/oidc/userinfo --keys-uri=URI https://login.microsoftonline.com/common/discovery/v2.0/keys --scope=STR openid email --idp-user-id=STR email --provider=google Choosing --provider=google expands to use the following options: Option Value --auth-uri=URI https://accounts.google.com/o/oauth2/auth --dev-auth-uri=URI https://oauth2.googleapis.com/device/code --token-uri=URI https://oauth2.googleapis.com/token --userinfo-uri=URI https://openidconnect.googleapis.com/v1/userinfo --keys-uri=URI https://www.googleapis.com/oauth2/v3/certs --scope=STR openid email --idp-user-id=STR email --provider=github Choosing --provider=github expands to use the following options: Option Value --auth-uri=URI https://github.com/login/oauth/authorize --dev-auth-uri=URI https://github.com/login/device/code --token-uri=URI https://github.com/login/oauth/access_token --userinfo-uri=URI https://openidconnect.googleapis.com/v1/userinfo --keys-uri=URI https://api.github.com/user --scope=STR user --idp-user-id=STR login --provider=keycloak With Keycloak, you can define multiple realms or organizations. Since it is often a part of a custom deployment, both base URL and realm ID are required, and you can specify them with the --base-url and --organization options to the ipa idp-add command: Choosing --provider=keycloak expands to use the following options. The value you specify in the --base-url option replaces the string USD{ipaidpbaseurl} in the table, and the value you specify for the --organization `option replaces the string `USD{ipaidporg} . 
Option Value --auth-uri=URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/auth --dev-auth-uri=URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/auth/device --token-uri=URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/token --userinfo-uri=URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/userinfo --scope=STR openid email --idp-user-id=STR email --provider=okta After registering a new organization in Okta, a new base URL is associated with it. You can specify this base URL with the --base-url option to the ipa idp-add command: Choosing --provider=okta expands to use the following options. The value you specify for the --base-url option replaces the string USD{ipaidpbaseurl} in the table. Option Value --auth-uri=URI https://USD{ipaidpbaseurl}/oauth2/v1/authorize --dev-auth-uri=URI https://USD{ipaidpbaseurl}/oauth2/v1/device/authorize --token-uri=URI https://USD{ipaidpbaseurl}/oauth2/v1/token --userinfo-uri=URI https://USD{ipaidpbaseurl}/oauth2/v1/userinfo --scope=STR openid email --idp-user-id=STR email Additional resources Pre-populated IdP templates
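Example: exercising the device authorization grant manually. The endpoint templates above can be tested directly with curl, which is useful when debugging a new IdP reference before pointing IdM at it. The following is a minimal sketch only, assuming the Keycloak server keycloak.idm.example.com:8443, the realm main, and the client ID id13778 from the earlier example, and that the jq utility is installed; the two URLs correspond to the --dev-auth-uri and --token-uri values of the Keycloak template.

# Request a device code and user code from the device authorization endpoint.
BASE_URL="https://keycloak.idm.example.com:8443/auth"
REALM="main"
CLIENT_ID="id13778"
RESPONSE=$(curl -s -X POST \
  -d "client_id=${CLIENT_ID}" \
  --data-urlencode "scope=openid email" \
  "${BASE_URL}/realms/${REALM}/protocol/openid-connect/auth/device")
DEVICE_CODE=$(echo "${RESPONSE}" | jq -r '.device_code')
echo "Authenticate at: $(echo "${RESPONSE}" | jq -r '.verification_uri_complete')"

# After authorizing in a browser, poll the token endpoint for the issued tokens.
curl -s -X POST \
  -d "client_id=${CLIENT_ID}" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
  -d "device_code=${DEVICE_CODE}" \
  "${BASE_URL}/realms/${REALM}/protocol/openid-connect/token" | jq .

This is the same exchange that the IdM KDC performs on the user's behalf when kinit -T is used, so a successful manual run confirms that the URIs, client ID, and scope in the IdP reference are usable.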
|
[
"kinit admin",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778 ------------------------------------------------ Added Identity Provider reference \"my-keycloak-idp\" ------------------------------------------------ Identity Provider reference name: my-keycloak-idp Authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth Device authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device Token URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token User info URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/userinfo Client identifier: ipa_oidc_client Scope: openid email External IdP user identifier attribute: email",
"ipa idp-show my-keycloak-idp",
"ipa idp-add my-azure-idp --provider microsoft --organization main --client-id <azure_client_id>",
"ipa idp-add my-google-idp --provider google --client-id <google_client_id>",
"ipa idp-add my-github-idp --provider github --client-id <github_client_id>",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id <keycloak_client_id>",
"ipa idp-add my-okta-idp --provider okta --base-url dev-12345.okta.com --client-id <okta_client_id>",
"kinit admin",
"ipa idp-find keycloak",
"ipa idp-show my-keycloak-idp",
"ipa idp-mod my-keycloak-idp --secret",
"ipa idp-del my-keycloak-idp",
"ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp --idp-user-id [email protected] --user-auth-type=idp --------------------------------- Modified user \"idm-user-with-external-idp\" --------------------------------- User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"ipa idp-add MySSO --provider keycloak --org main --base-url keycloak.domain.com:8443/auth --client-id <your-client-id>",
"ipa idp-add MyOkta --provider okta --base-url dev-12345.okta.com --client-id <your-client-id>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_using-external-identity-providers-to-authenticate-to-idm_managing-users-groups-hosts
|
6.1 Release Notes
|
6.1 Release Notes Red Hat Enterprise Linux 6 Release Notes for Red Hat Enterprise Linux 6.1 Red Hat Engineering Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/index
|
Chapter 77. KafkaClientAuthenticationPlain schema reference
|
Chapter 77. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationPlain schema properties To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. 77.1. username Specify the username in the username property. 77.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. An example SASL based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name 77.3. KafkaClientAuthenticationPlain schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Property type Description passwordSecret PasswordSecretSource Reference to the Secret which holds the password. type string Must be plain . username string Username used for the authentication.
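As a rough verification sketch, you can confirm that the Secret contains the expected key before referencing it from the authentication configuration. The names below match the example above; the kafka namespace and the placeholder password changeit are assumptions, so substitute the namespace of your Kafka Connect deployment and your real password.

# Create the password file and the Secret.
echo -n 'changeit' > my-password.txt
oc create secret generic my-connect-secret-name \
  --from-file=my-password-field-name=./my-password.txt \
  -n kafka

# Confirm that the key exists and decodes to the expected cleartext password.
oc get secret my-connect-secret-name -n kafka \
  -o jsonpath='{.data.my-password-field-name}' | base64 -d

# Remove the local cleartext file once the Secret has been created.
rm -f my-password.txt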
|
[
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm",
"authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaclientauthenticationplain-reference
|
Chapter 79. Next steps
|
Chapter 79. Next steps Packaging and deploying a Red Hat Decision Manager project
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/next_steps_6
|
5. Kernel-Related Updates
|
5. Kernel-Related Updates 5.1. All Architectures Bugzilla #467714 The ibmphp module is not safe to unload. Previously, the mechanism that prevented the ibmphp module from unloading was insufficient, and eventually triggered a bug halt. With this update, the method to prevent this module from unloading has been improved, preventing the bug halt. However, attempting to unload the module may produce a warning in the message log, indicating that the module is not safe to unload. This warning can be safely ignored. Bugzilla #461564 With this update, physical memory will be limited to 64GB for 32-bit x86 kernels running on systems with more than 64GB. The kernel splits memory into 2 separate regions: Lowmem and Highmem. Lowmem is mapped into the kernel address space at all times. Highmem, however, is mapped into a kernel virtual window a page at a time as needed. If memory I/Os are allowed to exceed 64GB, the mem_map (also known as the page array) size can approach or even exceed the size of Lowmem. If this happens, the kernel panics during boot or starts prematurely. In the latter case, the kernel fails to allocate kernel memory after booting and either panics or hangs. Bugzilla #246233 Previously, if a user pressed the arrow keys continously on a Hardware Virtual Machine (HVM) an interrupt race condition between the hardware interrupt and timer interrupt was encountered. As a result, the keyboard driver reported unknown keycode events. With this update, the i8042 polling timer has been removed, which resolves this issue. Bugzilla #435705 With this update, the diskdump utility (which provides the ability to create and collect vmcore Kernel dumps) is now supported for use with the sata_svw driver. Bugzilla #439043 With this update, the "swap_token_timeout" parameter has been added to /proc/sys/vm. This file contains valid hold time of swap out protection token. The Linux Virtual Memory (VM) subsystem has a token based thrashing control mechanism and uses the token to prevent unnecessary page faults in thrashing situation. The unit of the value is in `second`. The value would be useful to tune thrashing behavior. Setting it to 0 will disable the swap token mechanism. Bugzilla #439431 Previously, when a NFSv4 (Network File System Version 4) client encountered issues while processing a directory using readdir() , an error for the entire readdir() call was returned. With this update, the fattr4_rdattr_error flag is now set when readdir() is called, instructing the server to continue on and only report an error on the specific directory entry that was causing the issue. Bugzilla #443655 Previously, the NFS (Network File System) client was not handling malformed replies from the readdir() function. Consequently, the reply from the server would indicate that the call to the readdir() function was successful, but the reply would contain no entries. With this update, the readdir() reply parsing logic has been changed, such that when a malformed reply is received, the client returns an EIO error. Bugzilla #448076 The RPC client stores the result of a portmap call at a place in memory that can be freed and reallocated under the right circumstances. However, under some circumstances, the result of the portmap call was freed from memory too early, which may have resulted in memory corruption. With this update, reference counting has been added to the memory location where the portmap result is stored, and will only free it after it has been used. 
Bugzilla #450743 Under some circumstances, the allocation of some data structures for RPC calls may have been blocked when the system memory was low. Consequently, deadlock may have been encountered under heavy memory pressure when there were a large number of NFS pages awaiting writeback. With this update, the allocation of these data structures is now non-blocking, which resolves this issue. Bugzilla #451088 Previously, degraded performance may have been encountered when writing to a LVM mirrored volume synchronously (using the O_SYNC flag). Consequently, every write I/O to a mirrored volume was delayed by 3ms, resulting in the mirrored volume being approximately 5-10 times slower than a linear volume. With this update, I/O queue unplugging has been added to the dm-raid1 driver, and the performace of mirrored volumes has been improved to be comparable with that of linear volumes. Bugzilla #476997 A new tuning parameter has been added to allow system administrators to change the max number of modified pages kupdate writes to disk per iteration each time it runs. This new tunable ( /proc/sys/vm/max_writeback_pages ) defaults to a value of 1024 (4MB) so that a maximum of 1024 pages get written out by each iteration of kupdate . Increasing this value alters how aggressively kupdate flushes modified pages and decreases the potential amount of data loss if the system crashes between kupdate runs. However, increasing the max_writeback_pages value may have negative performance consequences on systems that are sensitive to I/O loads. Bugzilla #456911 A new allowable value has been added to the /proc/sys/kernel/wake_balance tunable parameter. Setting wake_balance to a value of 2 will instruct the scheduler to run the thread on any available CPU rather than scheduling it on the optimal CPU. Setting this kernel parameter to 2 will force the scheduler to reduce the overall latency even at the cost of total system throughput. Bugzilla #475715 When checking a directory tree, the kernel module could, in some circumstances, incorrectly decide the tree was not busy. An active offset mount with an open file handle being used for expires caused the file handle to not count toward the busyness check. This resulted in mount requests being made for already mounted offsets. With this update, the kernel module check has been corrected and incorrect mount requests are no longer generated. Bugzilla #453470 During system initalization, the CPU vendor was detected after the initialization of the Advanced Programmable Interrupt Controllers (APICs). Consequently, on x86_64 AMD systems with more than 8 cores, APIC clustered mode was used, resulting in suboptimal system performance. With this update, the CPU vendor is now queried prior to initializing the APICs, resulting in APIC physical flat mode being used by default, which resolves this issue. Bugzilla #462459 The Common Internet File System (CIFS) code has been updated in Red Hat Enterprise Linux 4.8, fixing a number of bugs that had been repaired in upstream, including the following change: Previously, when mounting a server without Unix extensions, it was possible to change the mode of a file. However, this mode change could not be permanently stored, and may have changed back to the original mode at any time. With this update, the mode of the file cannot be temporarily changed by default; chmod() calls will return success, but have no effect. A new mount option, dynperm needs to be used if the old behavior is required. 
Bugzilla #451819 Previously, in the kernel, there was a race condition may have been encountered between dio_bio_end_aio() and dio_await_one() . This may have lead to a situation where direct I/O is left waiting indefinitely on an I/O process that has already completed. With this update, these reference counting operations are now locked so that the submission and completion paths see a unified state, which resolves this issue. Bugzilla #249775 Previously, upgrading a fully virtualized guest system from Red Hat Enterprise Linux 4.6 (with the kmod-xenpv package installed) to newer versions of Red Hat Enterprise Linux 4 resulted in an improper module dependency between the built-in kernel modules: xen-vbd.ko & xen-vnif.ko and the older xen-platform-pci.ko module. Consequently, file systems mounted via the xen-vbd.ko block driver, and guest networking using the xen-vnif.ko network driver would fail. In Red Hat Enterprise Linux 4.7, the functionality in the xen-platform-pci.ko module was built-in to the kernel. However, when a formally loadable kernel module becomes a part of the kernel, the symbol dependency check for existing loadable modules is not accounted for in the module-init-tools correctly. With this update, the xen-platform-pci.ko functionality has been removed from the built-in kernel and placed back into a loadable module, allowing the module-init-tools to check and create the proper dependencies during a kernel upgrade. Bugzilla #463897 Previously, attempting to mount disks or partitions in a 32-bit Red Hat Enterprise Linux 4.6 fully virtualized guest using the paravirtualized block driver( xen-vbd.ko ) on a 64-bit host would fail. With this update, the block front driver ( block.c ) has been updated to inform the block back driver that the guest is using the 32-bit protocol, which resolves this issue. Bugzilla #460984 Previously, installing the pv-on-hvm drivers on a bare-metal kernel automatically created the /proc/xen directory. Consequently, applications that verify if the system is running a virtualized kernel by checking for the existence of the /proc/xen directory may have incorrectly assumed that the virtualized kernel is being used. With this update, the pv-on-hvm drivers no longer create the /proc/xen directory, which resolves this issue. Bugzilla #455756 Previously, paravirtualized guests could only have a maximum of 16 disk devices. In this update, this limit has been increased to a maximum of 256 disk devices. Bugzilla #523930 In some circumstances, write operations to a particular TTY device opened by more than one user (eg, one opened it as /dev/console and the other opened it as /dev/ttyS0 ) were blocked. If one user opened the TTY terminal without setting the O_NONBLOCK flag, this user's write operations were suspended if the output buffer was full or if a STOP (Ctrl-S) signal was sent. As well, because the O_NONBLOCK flag was not respected, write operations for user terminals opened with the O_NONBLOCK flag set were also blocked. This update re-implements TTY locks, ensuring O_NONBLOCK works as expected, even if a STOP signal is sent from another terminal. Bugzilla #519692 Previously, the get_random_int() function returned the same number until the jiffies counter (which ticks at a clock interrupt frequency) or process ID (PID) changed, making it possible to predict the random numbers. This may have weakened the ASLR security feature. With this update, get_random_int() is more random and no longer uses a common seed value. 
This reduces the possibility of predicting the values get_random_int() returns. Bugzilla #518707 ib_mthca , the driver for Host Channel Adapter (HCA) cards based on the Mellanox Technologies MT25408 InfiniHost III Lx HCA integrated circuit device, uses kmalloc() to allocate large bitmasks. This ensures allocated memory is a contiguous physical block, as is required by DMA devices such as these HCA cards. Previously, the largest allowed kmalloc() was a 128kB page. If ib_mthca was set to allocate more than 128kB (for example, by setting the num_mutt option to "num_mutt=2097152", causing kmalloc() to allocate 256kB) the driver failed to load, returning the message This update alters the allocation methods of the ib_mthca driver. When mthca_buddy_init() wants more than a page, memory is allocated directly from the page allocator, rather than using kmalloc() . It is now possible to pin large amounts of memory for use by the ib_mthca driver by increasing the values assigned to num_mutt and num_mtt . Bugzilla #519446 Previously, there were some instances in the kernel where the __ptrace_unlink() function (part of the ptrace system call) used REMOVE_LINKS and SET_LINKS , rather than add_parent and remove_parent , while changing the parent of a process. This approach could abuse the global process list and, as a consequence, create deadlocked and unkillable processes in some circumstances. With this update, __ptrace_unlink() now uses add_parent and remove_parent in every instance, ensuring that deadlocked and unkillable processes cannot be created. Note Unkillable or deadlocked processes created by this bug had no effect on system availability.
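Several of the updates above introduce new tunable parameters under /proc. The following is a minimal sketch of how an administrator might inspect and adjust them; the values shown are illustrative only, the commands must be run as root, and changes made this way do not persist across reboots unless they are also added to /etc/sysctl.conf.

# Current maximum number of modified pages kupdate writes out per iteration.
cat /proc/sys/vm/max_writeback_pages

# Double the default (1024 pages, 4MB) to flush dirty pages more aggressively.
echo 2048 > /proc/sys/vm/max_writeback_pages

# Disable the swap out protection token mechanism.
echo 0 > /proc/sys/vm/swap_token_timeout

# Instruct the scheduler to run woken threads on any available CPU.
echo 2 > /proc/sys/kernel/wake_balance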
|
[
"Failed to initialize memory region table, aborting."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.8_release_notes/ar01s05
|
14.13.6. Configuring Virtual CPU Count
|
14.13.6. Configuring Virtual CPU Count To modify the number of CPUs assigned to a guest virtual machine, use the virsh setvcpus command: # virsh setvcpus {domain-name, domain-id or domain-uuid} count [[--config] [--live] | [--current]] [--guest] The following parameters may be set for the virsh setvcpus command: {domain-name, domain-id or domain-uuid} - Specifies the virtual machine. count - Specifies the number of virtual CPUs to set. Note The count value cannot exceed the number of CPUs that were assigned to the guest virtual machine when it was created. It may also be limited by the host or the hypervisor. For Xen, you can only adjust the virtual CPUs of a running domain if the domain is paravirtualized. --live - The default option, used if none are specified. The configuration change takes effect on the running guest virtual machine. This is referred to as a hot plug if the number of vCPUs is increased, and hot unplug if it is reduced. Important The vCPU hot unplug feature is a Technology Preview. Therefore, it is not supported and not recommended for use in high-value deployments. --config - The configuration change takes effect on the next reboot of the guest. Both the --config and --live options may be specified together if supported by the hypervisor. --current - The configuration change takes effect on the current state of the guest virtual machine. If used on a running guest, it acts as --live ; if used on a shut-down guest, it acts as --config . --maximum - Sets a maximum vCPU limit that can be hot-plugged on the next reboot of the guest. As such, it must only be used with the --config option, and not with the --live option. --guest - Instead of a hot plug or a hot unplug, the QEMU guest agent modifies the vCPU count directly in the running guest by enabling or disabling vCPUs. This option cannot be used with a count value higher than the current number of vCPUs in the guest, and configurations set with --guest are reset when a guest is rebooted. Example 14.4. vCPU hot plug and hot unplug To hot-plug a vCPU, run the following command on a guest with a single vCPU: This increases the number of vCPUs for guestVM1 to two. The change is performed while guestVM1 is running, as indicated by the --live option. To hot-unplug one vCPU from the same running guest, run the following: Be aware, however, that currently, using vCPU hot unplug can lead to problems with further modifications of the vCPU count.
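The following is a short additional sketch that combines the options described above: it raises the maximum vCPU limit and persists a new count for the next reboot, then verifies the result. The guest name guestVM1 matches the earlier example, and the virsh vcpucount command is assumed to be available in your virsh version for displaying the current and maximum counts.

# Raise the maximum number of vCPUs the guest may use after its next reboot.
virsh setvcpus guestVM1 4 --config --maximum

# Persist a new active vCPU count that takes effect on the next reboot.
virsh setvcpus guestVM1 2 --config

# Display the maximum and current vCPU counts for the persistent and live configurations.
virsh vcpucount guestVM1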
|
[
"virsh setvcpus guestVM1 2 --live",
"virsh setvcpus guestVM1 1 --live"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-displaying_per_guest_virtual_machine_information-configuring_virtual_cpu_count
|
Chapter 7. Clustering
|
Chapter 7. Clustering Clustering is sharing load between hosts. Each instance must be able to act as an entry point for UI and API access. This must enable the automation controller administrators to use load balancers in front of as many instances as they want and keep good data visibility. Note Load balancing is optional, and it is entirely possible to have ingress on one or all instances as needed. Each instance must be able to join the automation controller cluster and expand its ability to run jobs. This is a simple system where jobs can run anywhere rather than be directed on where to run. Also, you can group clustered instances into different pools or queues, called Instance groups . Ansible Automation Platform supports container-based clusters by using Kubernetes, meaning you can install new automation controller instances on this platform without any variation or diversion in functionality. You can create instance groups to point to a Kubernetes container. For more information, see the Container and instance groups section. Supported operating systems The following operating systems are supported for establishing a clustered environment: Red Hat Enterprise Linux 8 or later Note Isolated instances are not supported in conjunction with running automation controller in OpenShift. 7.1. Setup considerations Learn about the initial setup of clusters. To upgrade an existing cluster, see Upgrade Planning in the Ansible Automation Platform Upgrade and Migration Guide . Note the following important considerations in the new clustering environment: PostgreSQL is a standalone instance and is not clustered. Automation controller does not manage replica configuration or database failover (if the user configures standby replicas). When you start a cluster, the database node must be a standalone server, and PostgreSQL must not be installed on one of the automation controller nodes. PgBouncer is not recommended for connection pooling with automation controller. Automation controller relies on pg_notify for sending messages across various components, and therefore, PgBouncer cannot readily be used in transaction pooling mode. All instances must be reachable from all other instances and they must be able to reach the database. It is also important for the hosts to have a stable address or hostname (depending on how the automation controller host is configured). All instances must be geographically collocated, with reliable low-latency connections between instances. To upgrade to a clustered environment, your primary instance must be part of the default group in the inventory and it needs to be the first host listed in the default group. Manual projects must be manually synced to all instances by the customer, and updated on all instances at once. The inventory file for platform deployments should be saved or persisted. If new instances are to be provisioned, the passwords and configuration options, as well as host names, must be made available to the installer. 7.2. Install and configure Provisioning new instances involves updating the inventory file and re-running the setup playbook. It is important that the inventory file contains all passwords and information used when installing the cluster or other instances might be reconfigured. The inventory file contains a single inventory group, automationcontroller . 
Note All instances are responsible for various housekeeping tasks related to task scheduling, such as determining where jobs are supposed to be launched and processing playbook events, as well as periodic cleanup. [automationcontroller] hostA hostB hostC [instance_group_east] hostB hostC [instance_group_west] hostC hostD Note If no groups are selected for a resource, then the automationcontroller group is used, but if any other group is selected, then the automationcontroller group is not used in any way. The database group remains for specifying an external PostgreSQL. If the database host is provisioned separately, this group must be empty: [automationcontroller] hostA hostB hostC [database] hostDB When a playbook runs on an individual controller instance in a cluster, the output of that playbook is broadcast to all of the other nodes as part of automation controller's websocket-based streaming output functionality. You must handle this data broadcast using internal addressing by specifying a private routable address for each node in your inventory: [automationcontroller] hostA routable_hostname=10.1.0.2 hostB routable_hostname=10.1.0.3 hostC routable_hostname=10.1.0.4 routable_hostname For more information about routable_hostname , see General variables in the Red Hat Ansible Automation Platform Installation Guide . Important versions of automation controller used the variable name rabbitmq_host . If you are upgrading from a version of the platform, and you previously specified rabbitmq_host in your inventory, rename rabbitmq_host to routable_hostname before upgrading. 7.2.1. Instances and ports used by automation controller and automation hub Ports and instances used by automation controller and also required by the on-premise automation hub node are as follows: Port 80, 443 (normal automation controller and automation hub ports) Port 22 (ssh - ingress only required) Port 5432 (database instance - if the database is installed on an external instance, it must be opened to automation controller instances) 7.3. Status and monitoring by browser API Automation controller reports as much status as it can using the browser API at /api/v2/ping to validate the health of the cluster. This includes the following: The instance servicing the HTTP request The timestamps of the last heartbeat of all other instances in the cluster Instance Groups and Instance membership in those groups View more details about Instances and Instance Groups, including running jobs and membership information at /api/v2/instances/ and /api/v2/instance_groups/ . 7.4. Instance services and failure behavior Each automation controller instance is made up of the following different services working collaboratively: HTTP services This includes the automation controller application itself as well as external web services. Callback receiver Receives job events from running Ansible jobs. Dispatcher The worker queue that processes and runs all jobs. Redis This key value store is used as a queue for event data propagated from ansible-playbook to the application. Rsyslog The log processing service used to deliver logs to various external logging services. Automation controller is configured so that if any of these services or their components fail, then all services are restarted. If these fail often in a short span of time, then the entire instance is placed offline in an automated fashion to allow remediation without causing unexpected behavior. 
For backing up and restoring a clustered environment, see the Backup and restore clustered environments section. 7.5. Job runtime behavior The way jobs are run and reported to a normal user of automation controller does not change. On the system side, note the following differences: When a job is submitted from the API interface it is pushed into the dispatcher queue. Each automation controller instance connects to and receives jobs from that queue using a scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed. Project updates run successfully on any instance that could potentially run a job. Projects synchronize themselves to the correct version on the instance immediately before running the job. If the required revision is already locally checked out and Galaxy or Collections updates are not required, then a sync cannot be performed. When the synchronization happens, it is recorded in the database as a project update with a launch_type = sync and job_type = run . Project syncs do not change the status or version of the project; instead, they update the source tree only on the instance where they run. If updates are required from Galaxy or Collections, a sync is performed that downloads the required roles, consuming more space in your /tmp file . In cases where you have a large project (around 10 GB), disk space on /tmp can be an issue. 7.5.1. Job runs By default, when a job is submitted to the automation controller queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting the instances from which a job runs on. To support taking an instance offline temporarily, there is a property enabled defined on each instance. When this property is disabled, no jobs are assigned to that instance. Existing jobs finish, but no new work is assigned. Troubleshooting When you issue a cancel request on a running automation controller job, automation controller issues a SIGINT to the ansible-playbook process. While this causes Ansible to stop dispatching new tasks and exit, in many cases, module tasks that were already dispatched to remote hosts will run to completion. This behavior is similar to pressing Ctrl-c during a command-line Ansible run. With respect to software dependencies, if a running job is canceled, the job is removed but the dependencies remain. 7.6. Deprovisioning instances Re-running the setup playbook does not automatically deprovision instances since clusters do not currently distinguish between an instance that was taken offline intentionally or due to failure. Instead, shut down all services on the automation controller instance and then run the deprovisioning tool from any other instance. Procedure Shut down the instance or stop the service with the command: automation-controller-service stop . Run the following deprovision command from another instance to remove it from the automation controller cluster: USD awx-manage deprovision_instance --hostname=<name used in inventory file> Example Deprovisioning instance groups in automation controller does not automatically deprovision or remove instance groups. For more information, see the Deprovisioning instance groups section.
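Before deprovisioning an instance, it can be useful to confirm cluster health from the browser API described earlier. The following is a minimal sketch; the hostname and credentials are placeholders, jq is assumed to be installed, and the exact field names returned can vary between automation controller versions.

CONTROLLER_HOST="controller.example.com"

# List each instance and its last heartbeat as reported by the ping endpoint.
curl -s -k -u admin:password "https://${CONTROLLER_HOST}/api/v2/ping/" | jq '.instances'

# Review instance groups and their capacity before removing a node from the cluster.
curl -s -k -u admin:password "https://${CONTROLLER_HOST}/api/v2/instance_groups/" | jq '.results[] | {name, capacity}'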
|
[
"[automationcontroller] hostA hostB hostC [instance_group_east] hostB hostC [instance_group_west] hostC hostD",
"[automationcontroller] hostA hostB hostC [database] hostDB",
"[automationcontroller] hostA routable_hostname=10.1.0.2 hostB routable_hostname=10.1.0.3 hostC routable_hostname=10.1.0.4 routable_hostname",
"awx-manage deprovision_instance --hostname=<name used in inventory file>",
"awx-manage deprovision_instance --hostname=hostB"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-clustering
|
Chapter 11. Creating a customized instance
|
Chapter 11. Creating a customized instance Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can use the following methods to pass data to instances: User data Use to include instructions in the instance launch command for cloud-init to execute. Instance metadata A list of key-value pairs that you can specify when you create or update an instance. You can access the additional data passed to the instance by using a config drive or the metadata service. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file. Metadata service Provides a REST API to retrieve data specific to an instance. Instances access this service at 169.254.169.254 or at fe80::a9fe:a9fe . cloud-init can use both a config drive and the metadata service to consume the additional data for customizing an instance. The cloud-init package supports several data input formats. Shell scripts and the cloud-config format are the most common input formats: Shell scripts: The data declaration begins with #! or Content-Type: text/x-shellscript . Shell scripts are invoked last in the boot process. cloud-config format: The data declaration begins with #cloud-config or Content-Type: text/cloud-config . cloud-config files must be valid YAML to be parsed and executed by cloud-init . Note cloud-init has a maximum user data size of 16384 bytes for data passed to an instance. You cannot change the size limit, therefore use gzip compression when you need to exceed the size limit. Vendor-specific data The RHOSP administrator can also pass data to instances when they are being created. This data may not be visible to you as the cloud user, for example, a cryptographic token that registers the instance with Active Directory. The RHOSP administrator uses the vendordata feature to pass data to instances. Vendordata configuration is read only, and is located in one of the following files: /openstack/{version}/vendor_data.json /openstack/{version}/vendor_data2.json You can view these files using the metadata service or from the config drive on your instance. To access the files by using the metadata service, make a GET request to either http://169.254.169.254/openstack/{version}/vendor_data.json or http://169.254.169.254/openstack/{version}/vendor_data2.json . 11.1. Customizing an instance by using user data You can use user data to include instructions in the instance launch command. cloud-init executes these commands to customize the instance as the last step in the boot process. Procedure Create a file with instructions for cloud-init . 
For example, create a bash script that installs and enables a web server on the instance: Launch an instance with the --user-data option to pass the bash script: When the instance state is active, attach a floating IP address: Log in to the instance with SSH: Check that the customization was successfully performed. For example, to check that the web server has been installed and enabled, enter the following command: Review the /var/log/cloud-init.log file for relevant messages, such as whether or not the cloud-init executed: 11.2. Customizing an instance by using metadata You can use instance metadata to specify the properties of an instance in the instance launch command. Procedure Launch an instance with the --property <key=value> option. For example, to mark the instance as a webserver, set the following property: Optional: Add an additional property to the instance after it is created, for example: 11.3. Customizing an instance by using a config drive You can create a config drive for an instance that is attached during the instance boot process. You can pass content to the config drive that the config drive makes available to the instance. Procedure Enable the config drive, and specify a file that contains content that you want to make available in the config drive. For example, the following command creates a new instance named config-drive-instance and attaches a config drive that contains the contents of the file my-user-data.txt : This command creates the config drive with the volume label of config-2 , which is attached to the instance when it boots, and adds the contents of my-user-data.txt to the user_data file in the openstack/{version}/ directory of the config drive. Log in to the instance. Mount the config drive: If the instance OS uses udev : If the instance OS does not use udev , you need to first identify the block device that corresponds to the config drive:
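After the instance is running, you can also read the supplied data back through the metadata service mentioned earlier. The following is a minimal sketch to run from inside the instance; the latest path element selects the most recent metadata version, and the individual files are only present if the corresponding data was supplied.

METADATA_URL="http://169.254.169.254/openstack/latest"

# Instance metadata, including any key-value pairs set with --property.
curl -s "${METADATA_URL}/meta_data.json"

# User data passed with --user-data, if any.
curl -s "${METADATA_URL}/user_data"

# Vendor-specific data supplied by the RHOSP administrator, if any.
curl -s "${METADATA_URL}/vendor_data.json"
curl -s "${METADATA_URL}/vendor_data2.json"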
|
[
"vim /home/scripts/install_httpd #!/bin/bash -y install httpd python-psycopg2 systemctl enable httpd --now",
"openstack server create --image rhel8 --flavor default --nic net-id=web-server-network --security-group default --key-name web-server-keypair --user-data /home/scripts/install_httpd --wait web-server-instance",
"openstack floating ip create web-server-network openstack server add floating ip web-server-instance 172.25.250.123",
"ssh -i ~/.ssh/web-server-keypair [email protected]",
"curl http://localhost | grep Test <title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title> <h1>Red Hat Enterprise Linux <strong>Test Page</strong></h1>",
"sudo less /var/log/cloud-init.log ...output omitted ...util.py[DEBUG]: Cloud-init v. 0.7.9 finished at Sat, 23 Jun 2018 02:26:02 +0000. Datasource DataSourceOpenStack [net,ver=2]. Up 21.25 seconds",
"openstack server create --image rhel8 --flavor default --property role=webservers --wait web-server-instance",
"openstack server set --property region=emea --wait web-server-instance",
"(overcloud)USD openstack server create --flavor m1.tiny --config-drive true --user-data ./my-user-data.txt --image cirros config-drive-instance",
"mkdir -p /mnt/config mount /dev/disk/by-label/config-2 /mnt/config",
"blkid -t LABEL=\"config-2\" -odevice /dev/vdb mkdir -p /mnt/config mount /dev/vdb /mnt/config"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/assembly_creating-a-customized-instance_instances
|
Chapter 5. KafkaClusterSpec schema reference
|
Chapter 5. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster using the Kafka custom resource. The config properties are one part of the overall configuration for the resource. Use the config properties to configure Kafka broker options as keys. Example Kafka configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.9.0 metadataVersion: 3.9 # ... config: auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 # ... The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity Properties with the following prefixes cannot be set: advertised. authorizer. broker. controller cruise.control.metrics.reporter.bootstrap. cruise.control.metrics.topic host.name inter.broker.listener.name listener. listeners. log.dir password. port process.roles sasl. security. servers,node.id ssl. super.user zookeeper.clientCnxnSocket zookeeper.connect zookeeper.set.acl zookeeper.ssl If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection Cruise Control metrics properties: cruise.control.metrics.topic.num.partitions cruise.control.metrics.topic.replication.factor cruise.control.metrics.topic.retention.ms cruise.control.metrics.topic.auto.create.retries cruise.control.metrics.topic.auto.create.timeout.ms cruise.control.metrics.topic.min.insync.replicas Controller properties: controller.quorum.election.backoff.max.ms controller.quorum.election.timeout.ms controller.quorum.fetch.timeout.ms 5.1. Configuring rack awareness and init container images Rack awareness is enabled using the rack property. When rack awareness is enabled, Kafka broker pods use init container to collect the labels from the OpenShift cluster nodes. The container image for this init container can be specified using the brokerRackInitImage property. If the brokerRackInitImage field is not provided, the images used are prioritized as follows: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... 
Note Overriding container images is recommended only in special situations, such as when your network does not allow access to the container registry used by Streams for Apache Kafka. In such cases, you should either copy the Streams for Apache Kafka images or build them from the source. Be aware that if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly. 5.2. Logging Kafka has its own configurable loggers, which include the following: log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 5.3. KafkaClusterSpec schema properties Property Property type Description version string The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. metadataVersion string Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. 
This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property. replicas integer The number of pods in the cluster. This property is required when node pools are not used. image string The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. Changing the Kafka image version does not automatically update the image versions for other components, such as Kafka Exporter. listeners GenericKafkaListener array Configures listeners to provide access to Kafka brokers. config map Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable, client.quota.callback.static.kafka.admin., client.quota.callback.static.produce, client.quota.callback.static.fetch, client.quota.callback.static.storage.per.volume.limit.min.available., client.quota.callback.static.excluded.principal.name.list (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). storage EphemeralStorage , PersistentClaimStorage , JbodStorage Storage configuration (disk). Cannot be updated. This property is required when node pools are not used. authorization KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom Authorization configuration for Kafka brokers. rack Rack Configuration of the broker.rack broker config. brokerRackInitImage string The image of the init container used for initializing the broker.rack . livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options for Kafka brokers. resources ResourceRequirements CPU and memory resources to reserve. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. logging InlineLogging , ExternalLogging Logging configuration for Kafka. template KafkaClusterTemplate Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. tieredStorage TieredStorageCustom Configure the tiered storage feature for Kafka brokers. quotas QuotasPluginKafka , QuotasPluginStrimzi Quotas plugin configuration for Kafka brokers allows setting quotas for disk usage, produce/fetch rates, and more. Supported plugin types include kafka (default) and strimzi . If not specified, the default kafka quotas plugin is used.
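Because forbidden or unknown config options are disregarded and logged rather than rejected, it can be worth checking the Cluster Operator log after changing the config block. The following is a minimal sketch; the kafka namespace, the strimzi-cluster-operator deployment name, and the kafka.yaml file name are assumptions to adjust for your installation.

# Apply the updated Kafka resource.
oc apply -f kafka.yaml -n kafka

# Look for warnings about options that the Cluster Operator disregarded.
oc logs deployment/strimzi-cluster-operator -n kafka | grep -iE "forbidden|warn"

# Confirm the broker configuration currently declared in the resource.
oc get kafka my-cluster -n kafka -o jsonpath='{.spec.kafka.config}'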
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.9.0 metadataVersion: 3.9 # config: auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaClusterSpec-reference
|
Chapter 5. Using CPU Manager and Topology Manager
|
Chapter 5. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 5.1. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. 
That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # ββinit.scope β ββ1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 ββkubepods.slice ββkubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β ββcrio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β ββ32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 5.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. 
Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 5.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 5.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod.
For this pod, Topology Manager consults the hint providers, CPU Manager and Device Manager, to get topology hints and uses that information to store the best topology for the container. CPU Manager and Device Manager then use the stored information at the resource allocation stage.
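Once the updated KubeletConfig has rolled out, the policy can be confirmed on a node in the same way the CPU Manager policy was checked earlier. A minimal verification sketch, assuming the node name used in the previous examples; the last line shows the expected rendering for the single-numa-node policy configured above:
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep topologyManagerPolicy
topologyManagerPolicy: single-numa-node
If the setting is not present yet, check the rollout status of the worker pool with oc get machineconfigpool worker before re-testing.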
|
[
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"ββinit.scope β ββ1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 ββkubepods.slice ββkubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β ββcrio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β ββ32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\""
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/using-cpu-manager
|
Chapter 1. Logging configuration
|
Chapter 1. Logging configuration Read about the use of logging API in Red Hat build of Quarkus, configuring logging output, and using logging adapters to unify the output from other logging APIs. Quarkus uses the JBoss Log Manager logging backend for publishing application and framework logs. Quarkus supports the JBoss Logging API and multiple other logging APIs, seamlessly integrated with JBoss Log Manager. You can use any of the following APIs : JBoss Logging JDK java.util.logging (JUL) SLF4J Apache Commons Logging Apache Log4j 2 Apache Log4j 1 1.1. Use JBoss Logging for application logging When using the JBoss Logging API, your application requires no additional dependencies, as Red Hat build of Quarkus automatically provides it. An example of using the JBoss Logging API to log a message: import org.jboss.logging.Logger; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/hello") public class ExampleResource { private static final Logger LOG = Logger.getLogger(ExampleResource.class); @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { LOG.info("Hello"); return "hello"; } } Note While JBoss Logging routes log messages into JBoss Log Manager directly, one of your libraries might rely on a different logging API. In such cases, you need to use a logging adapter to ensure that its log messages are routed to JBoss Log Manager as well. 1.2. Get an application logger To get an application logger in Red Hat build of Quarkus, select one of the following approaches. Declaring a logger field Simplified logging Injecting a configured logger 1.2.1. Declaring a logger field With this classic approach, you use a specific API to obtain a logger instance, store it in a static field of a class, and call logging operations upon this instance. The same flow can be applied with any of the supported logging APIs . An example of storing a logger instance into a static field by using the JBoss Logging API: package com.example; import org.jboss.logging.Logger; public class MyService { private static final Logger log = Logger.getLogger(MyService.class); 1 public void doSomething() { log.info("It works!"); 2 } } 1 Define the logger field. 2 Invoke the desired logging methods on the log object. 1.2.2. Simplified logging Quarkus simplifies logging by automatically adding logger fields to classes that use io.quarkus.logging.Log . This eliminates the need for repetitive boilerplate code and enhances logging setup convenience. An example of simplified logging with static method calls: package com.example; import io.quarkus.logging.Log; 1 class MyService { 2 public void doSomething() { Log.info("Simple!"); 3 } } 1 The io.quarkus.logging.Log class contains the same methods as JBoss Logging, except that they are static . 2 Note that the class does not declare a logger field. This is because during application build, a private static final org.jboss.logging.Logger field is created automatically in each class that uses the Log API. The fully qualified name of the class that calls the Log methods is used as a logger name. In this example, the logger name would be com.example.MyService . 3 Finally, all calls to Log methods are rewritten to regular JBoss Logging calls on the logger field during the application build. Warning Only use the Log API in application classes, not in external dependencies. Log method calls that are not processed by Quarkus at build time will throw an exception. 1.2.3. 
Injecting a configured logger The injection of a configured org.jboss.logging.Logger logger instance with the @Inject annotation is another alternative to adding an application logger, but is applicable only to CDI beans. You can use @Inject Logger log , where the logger gets named after the class you inject it to, or @Inject @LoggerName("... ") Logger log , where the logger will receive the specified name. Once injected, you can use the log object to invoke logging methods. An example of two different types of logger injection: package com.example; import org.jboss.logging.Logger; @ApplicationScoped class SimpleBean { @Inject Logger log; 1 @LoggerName("foo") Logger fooLog; 2 public void ping() { log.info("Simple!"); fooLog.info("Goes to _foo_ logger!"); } } 1 The fully qualified class name (FQCN) of the declaring class is used as a logger name, for example, org.jboss.logging.Logger.getLogger(SimpleBean.class) will be used. 2 In this case, the name foo is used as a logger name, for example, org.jboss.logging.Logger.getLogger("foo") will be used. Note The logger instances are cached internally. Therefore, when a logger is injected, for example, into a @RequestScoped bean, it is shared for all bean instances to avoid possible performance penalties associated with logger instantiation. 1.3. Use log levels Red Hat build of Quarkus provides different log levels, which helps developers to control the amount of information logged based on the severity of the events. Table 1.1. Available log levels: OFF A special level used in configuration to turn off logging. FATAL A critical service failure or total inability to handle any requests. ERROR A major issue in processing or an inability to complete a request. WARN A non-critical service error or problem that might not require immediate correction. INFO Service lifecycle events or other important infrequent information. DEBUG Additional information about lifecycle events or events not tied to specific requests, useful for debugging. TRACE Detailed per-request debugging information, potentially at a very high frequency. ALL A special level to turn on logging for all messages, including custom levels. You can also configure the following levels for applications and libraries that use java.util.logging : SEVERE Same as ERROR . WARNING Same as WARN . CONFIG Service configuration information. FINE Same as DEBUG . FINER Same as TRACE . FINEST Increased debug output compared to TRACE , which might have a higher frequency. Table 1.2. The mapping between the levels Numerical level value Standard level name Equivalent java.util.logging (JUL) level name 1100 FATAL Not applicable 1000 ERROR SEVERE 900 WARN WARNING 800 INFO INFO 700 Not applicable CONFIG 500 DEBUG FINE 400 TRACE FINER 300 Not applicable FINEST 1.4. Configure the log level, category, and format JBoss Logging, integrated into Red Hat build of Quarkus, offers a unified configuration for all supported logging APIs through a single configuration file that sets up all available extensions. To adjust runtime logging, modify the application.properties file. An example of how you can set the default log level to INFO logging and include Hibernate DEBUG logs: quarkus.log.level=INFO quarkus.log.category."org.hibernate".level=DEBUG When you set the log level to below DEBUG , you must also adjust the minimum log level. 
This setting might be applied either globally with the quarkus.log.min-level configuration property, or per category: quarkus.log.category."org.hibernate".min-level=TRACE This sets a floor level for which Quarkus needs to generate supporting code. The minimum log level must be set at build time so that Quarkus can open the door to optimization opportunities where logging on unusable levels can be elided. An example from native execution: Setting INFO as the minimum logging level sets lower-level checks, such as isTraceEnabled , to false . This identifies code like if(logger.isDebug()) callMethod(); that will never be executed and mark it as "dead." Warning If you add these properties on the command line, ensure the " character is escaped properly: All potential properties are listed in the logging configuration reference section. 1.4.1. Logging categories Logging is configured on a per-category basis, with each category being configured independently. Configuration for a category applies recursively to all subcategories unless there is a more specific subcategory configuration. The parent of all logging categories is called the "root category." As the ultimate parent, this category might contain a configuration that applies globally to all other categories. This includes the globally configured handlers and formatters. Example 1.1. An example of a global configuration that applies to all categories: quarkus.log.handlers=con,mylog quarkus.log.handler.console.con.enable=true quarkus.log.handler.file.mylog.enable=true In this example, the root category is configured to use two named handlers: con and mylog . Example 1.2. An example of a per-category configuration: quarkus.log.category."org.apache.kafka.clients".level=INFO quarkus.log.category."org.apache.kafka.common.utils".level=INFO This example shows how you can configure the minimal log level on the categories org.apache.kafka.clients and org.apache.kafka.common.utils . For more information, see Logging configuration reference . If you want to configure something extra for a specific category, create a named handler like quarkus.log.handler.[console|file|syslog].<your-handler-name>.* and set it up for that category by using quarkus.log.category.<my-category>.handlers . An example use case can be a desire to use a different timestamp format for log messages which are saved to a file than the format used for other handlers. For further demonstration, see the outputs of the Attaching named handlers to a category example. Property Name Default Description quarkus.log.category."<category-name>".level INFO [a] The level to use to configure the category named <category-name> . The quotes are necessary. quarkus.log.category."<category-name>".min-level DEBUG The minimum logging level to use to configure the category named <category-name> . The quotes are necessary. quarkus.log.category."<category-name>".use-parent-handlers true Specify whether this logger should send its output to its parent logger. quarkus.log.category."<category-name>".handlers=[<handler>] empty [b] The names of the handlers that you want to attach to a specific category. [a] Some extensions might define customized default log levels for certain categories to reduce log noise by default. Setting the log level in configuration will override any extension-defined log levels. [b] By default, the configured category gets the same handlers attached as the one on the root logger. Note The . symbol separates the specific parts in the configuration property. 
The quotes in the property name are used as a required escape to keep category specifications, such as quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE , intact. 1.4.2. Root logger configuration The root logger category is handled separately, and is configured by using the following properties: Property Name Default Description quarkus.log.level INFO The default log level for every log category. quarkus.log.min-level DEBUG The default minimum log level for every log category. The parent category is examined if no level configuration exists for a given logger category. The root logger configuration is used if no specific configurations are provided for the category and any of its parent categories. Note Although the root logger's handlers are usually configured directly via quarkus.log.console , quarkus.log.file and quarkus.log.syslog , it can nonetheless have additional named handlers attached to it using the quarkus.log.handlers property. 1.5. Logging format Red Hat build of Quarkus uses a pattern-based logging formatter that generates human-readable text logs by default, but you can also configure the format for each log handler by using a dedicated property. For the console handler, the property is quarkus.log.console.format . The logging format string supports the following symbols: Symbol Summary Description %% % Renders a simple % character. %c Category Renders the category name. %C Source class Renders the source class name. [a] %d{xxx} Date Renders a date with the given date format string, which uses the syntax defined by java.text.SimpleDateFormat . %e Exception Renders the thrown exception, if any. %F Source file Renders the source file name. [a] %h Host name Renders the system simple host name. %H Qualified host name Renders the system's fully qualified host name, which might be the same as the simple host name, depending on operating system configuration. %i Process ID Render the current process PID. %l Source location Renders the source location information, which includes source file name, line number, class name, and method name. [a] %L Source line Renders the source line number. [a] %m Full Message Renders the log message plus exception (if any). %M Source method Renders the source method name. [a] %n Newline Renders the platform-specific line separator string. %N Process name Render the name of the current process. %p Level Render the log level of the message. %r Relative time Render the time in milliseconds since the start of the application log. %s Simple message Renders just the log message, with no exception trace. %t Thread name Render the thread name. %t{id} Thread ID Render the thread ID. %z{<zone name>} Time zone Set the time zone of the output to <zone name> . %X{<MDC property name>} Mapped Diagnostic Context Value Renders the value from Mapped Diagnostic Context. %X Mapped Diagnostic Context Values Renders all the values from Mapped Diagnostic Context in format {property.key=property.value} . %x Nested Diagnostics context values Renders all the values from Nested Diagnostics Context in format {value1.value2} . [a] Format sequences which examine caller information might affect performance 1.5.1. Alternative console logging formats Changing the console log format is useful, for example, when the console output of the Quarkus application is captured by a service that processes and stores the log information for later analysis. 1.5.1.1. 
JSON logging format The quarkus-logging-json extension might be employed to add support for the JSON logging format and its related configuration. Add this extension to your build file as the following snippet illustrates: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-logging-json</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-logging-json") By default, the presence of this extension replaces the output format configuration from the console configuration, and the format string and the color settings (if any) are ignored. The other console configuration items, including those that control asynchronous logging and the log level, will continue to be applied. For some, it will make sense to use humanly readable (unstructured) logging in dev mode and JSON logging (structured) in production mode. This can be achieved using different profiles, as shown in the following configuration. Disable JSON logging in application.properties for dev and test mode: %dev.quarkus.log.console.json=false %test.quarkus.log.console.json=false 1.5.1.1.1. Configuration Configure the JSON logging extension using supported properties to customize its behavior. Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default Console logging Type Default quarkus.log.console.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_CONSOLE_JSON boolean true quarkus.log.console.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_CONSOLE_JSON_PRETTY_PRINT boolean false quarkus.log.console.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_DATE_FORMAT string default quarkus.log.console.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_RECORD_DELIMITER string quarkus.log.console.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_ZONE_ID string default quarkus.log.console.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_CONSOLE_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.console.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_CONSOLE_JSON_PRINT_DETAILS boolean false quarkus.log.console.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_CONSOLE_JSON_KEY_OVERRIDES string quarkus.log.console.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_CONSOLE_JSON_EXCLUDED_KEYS list of string quarkus.log.console.json.additional-field."field-name".value Additional field value. 
Environment variable: QUARKUS_LOG_CONSOLE_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.console.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_CONSOLE_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string File logging Type Default quarkus.log.file.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_FILE_JSON boolean true quarkus.log.file.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_FILE_JSON_PRETTY_PRINT boolean false quarkus.log.file.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_FILE_JSON_DATE_FORMAT string default quarkus.log.file.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. Environment variable: QUARKUS_LOG_FILE_JSON_RECORD_DELIMITER string quarkus.log.file.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_FILE_JSON_ZONE_ID string default quarkus.log.file.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_FILE_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.file.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_FILE_JSON_PRINT_DETAILS boolean false quarkus.log.file.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_FILE_JSON_KEY_OVERRIDES string quarkus.log.file.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_FILE_JSON_EXCLUDED_KEYS list of string quarkus.log.file.json.additional-field."field-name".value Additional field value. Environment variable: QUARKUS_LOG_FILE_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.file.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_FILE_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string Syslog logging Type Default quarkus.log.syslog.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_SYSLOG_JSON boolean true quarkus.log.syslog.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_SYSLOG_JSON_PRETTY_PRINT boolean false quarkus.log.syslog.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_SYSLOG_JSON_DATE_FORMAT string default quarkus.log.syslog.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. 
Environment variable: QUARKUS_LOG_SYSLOG_JSON_RECORD_DELIMITER string quarkus.log.syslog.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ZONE_ID string default quarkus.log.syslog.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_SYSLOG_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.syslog.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_SYSLOG_JSON_PRINT_DETAILS boolean false quarkus.log.syslog.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_SYSLOG_JSON_KEY_OVERRIDES string quarkus.log.syslog.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_SYSLOG_JSON_EXCLUDED_KEYS list of string quarkus.log.syslog.json.additional-field."field-name".value Additional field value. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.syslog.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string Warning Enabling pretty printing might cause certain processors and JSON parsers to fail. Note Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. 1.6. Log handlers A log handler is a logging component responsible for the emission of log events to a recipient. Red Hat build of Quarkus includes several different log handlers: console , file , and syslog . The featured examples use com.example as a logging category. 1.6.1. Console log handler The console log handler is enabled by default, and it directs all log events to the application's console, usually the system's stdout . A global configuration example: quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.console.my-console-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-console-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the console logging configuration reference. 1.6.2. File log handler To log events to a file on the application's host, use the Quarkus file log handler. The file log handler is disabled by default, so you must first enable it. The Quarkus file log handler supports log file rotation. Log file rotation ensures efficient log management by preserving a specified number of backup files while keeping the primary log file updated and at a manageable size. 
A global configuration example: quarkus.log.file.enable=true quarkus.log.file.path=application.log quarkus.log.file.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.file.my-file-handler.enable=true quarkus.log.handler.file.my-file-handler.path=application.log quarkus.log.handler.file.my-file-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-file-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the file logging configuration reference. 1.6.3. Syslog log handler The syslog handler in Quarkus follows the Syslog protocol, which is used to send log messages on UNIX-like systems. It uses the protocol defined in RFC 5424 . By default, the syslog handler is disabled. When enabled, it sends all log events to a syslog server, typically the local syslog server for the application. A global configuration example: quarkus.log.syslog.enable=true quarkus.log.syslog.app-name=my-application quarkus.log.syslog.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.syslog.my-syslog-handler.enable=true quarkus.log.handler.syslog.my-syslog-handler.app-name=my-application quarkus.log.handler.syslog.my-syslog-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-syslog-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the Syslog logging configuration reference. 1.7. Add a logging filter to your log handler Log handlers, such as the console log handler, can be linked with a filter that determines whether a log record should be logged. To register a logging filter: Annotate a final class that implements java.util.logging.Filter with @io.quarkus.logging.LoggingFilter , and set the name property: An example of writing a filter: package com.example; import io.quarkus.logging.LoggingFilter; import java.util.logging.Filter; import java.util.logging.LogRecord; @LoggingFilter(name = "my-filter") public final class TestFilter implements Filter { private final String part; public TestFilter(@ConfigProperty(name = "my-filter.part") String part) { this.part = part; } @Override public boolean isLoggable(LogRecord record) { return !record.getMessage().contains(part); } } In this example, we exclude log records containing specific text from console logs. The specific text to filter on is not hard-coded; instead, it is read from the my-filter.part configuration property. An example of Configuring the filter in application.properties : my-filter.part=TEST Attach the filter to the corresponding handler using the filter configuration property, located in application.properties : quarkus.log.console.filter=my-filter 1.8. Examples of logging configurations The following examples show some of the ways in which you can configure logging in Red Hat build of Quarkus: Console DEBUG logging except for Quarkus logs (INFO), no color, shortened time, shortened category prefixes quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n quarkus.log.console.level=DEBUG quarkus.console.color=false quarkus.log.category."io.quarkus".level=INFO Note If you add these properties in the command line, ensure " is escaped. For example, -Dquarkus.log.category.\"io.quarkus\".level=DEBUG . 
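Shell escaping of the quoted category name can be avoided altogether by using the environment-variable form of the property. The variable name below is an assumption derived from the QUARKUS_LOG_CATEGORY__CATEGORIES__LEVEL pattern listed in the logging configuration reference, with io.quarkus substituted for the category placeholder:
# Equivalent to quarkus.log.category."io.quarkus".level=INFO, no quoting or escaping required
export QUARKUS_LOG_CATEGORY__IO_QUARKUS__LEVEL=INFO
This form is convenient in container environments where passing quoted system properties through several layers of shell is error-prone.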
File TRACE logging configuration quarkus.log.file.enable=true # Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.file.level=TRACE quarkus.log.file.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n # Set 2 categories (io.quarkus.smallrye.jwt, io.undertow.request.security) to TRACE level quarkus.log.min-level=TRACE quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE quarkus.log.category."io.undertow.request.security".level=TRACE Note As we do not change the root logger, the console log contains only INFO or higher level logs. Named handlers attached to a category # Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n # Configure a named handler that logs to console quarkus.log.handler.console."STRUCTURED_LOGGING".format=%e%n # Configure a named handler that logs to file quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".enable=true quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".format=%e%n # Configure the category and link the two named handlers to it quarkus.log.category."io.quarkus.category".level=INFO quarkus.log.category."io.quarkus.category".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE Named handlers attached to the root logger # configure a named file handler that sends the output to 'quarkus.log' quarkus.log.handler.file.CONSOLE_MIRROR.enable=true quarkus.log.handler.file.CONSOLE_MIRROR.path=quarkus.log # attach the handler to the root logger quarkus.log.handlers=CONSOLE_MIRROR 1.9. Centralized log management Use a centralized location to efficiently collect, store, and analyze log data from various components and instances of the application. To send logs to a centralized tool such as Graylog, Logstash, or Fluentd, see the Quarkus Centralized log management guide. 1.10. Configure logging for @QuarkusTest Enable proper logging for @QuarkusTest by setting the java.util.logging.manager system property to org.jboss.logmanager.LogManager . The system property must be set early on to be effective, so it is recommended to configure it in the build system. Setting the java.util.logging.manager system property in the Maven Surefire plugin configuration <build> <plugins> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> 1 <quarkus.log.level>DEBUG</quarkus.log.level> 2 <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> 1 Make sure the org.jboss.logmanager.LogManager is used. 2 Enable debug logging for all logging categories. For Gradle, add the following configuration to the build.gradle file: test { systemProperty "java.util.logging.manager", "org.jboss.logmanager.LogManager" } See also Running @QuarkusTest from an IDE . 1.11. Use other logging APIs Red Hat build of Quarkus relies on the JBoss Logging library for all the logging requirements. Suppose you use libraries that depend on other logging libraries, such as Apache Commons Logging, Log4j, or SLF4J. In that case, exclude them from the dependencies and use one of the JBoss Logging adapters. 
This is especially important when building native executables, as you could encounter issues similar to the following when compiling the native executable: The logging implementation is not included in the native executable, but you can resolve this issue using JBoss Logging adapters. These adapters are available for popular open-source logging components, as explained in the chapter. 1.11.1. Add a logging adapter to your application For each logging API that is not jboss-logging : Add a logging adapter library to ensure that messages logged through these APIs are routed to the JBoss Log Manager backend. Note This step is unnecessary for libraries that are dependencies of a Quarkus extension where the extension handles it automatically. Apache Commons Logging: Using Maven: <dependency> <groupId>org.jboss.logging</groupId> <artifactId>commons-logging-jboss-logging</artifactId> </dependency> Using Gradle: implementation("org.jboss.logging:commons-logging-jboss-logging") Log4j: Using Maven: <dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.logmanager:log4j-jboss-logmanager") Log4j 2: Using Maven: <dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j2-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.logmanager:log4j2-jboss-logmanager") Note Do not include any Log4j dependencies, as the log4j2-jboss-logmanager library contains everything needed to use Log4j as a logging implementation. SLF4J: Using Maven: <dependency> <groupId>org.jboss.slf4j</groupId> <artifactId>slf4j-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.slf4j:slf4j-jboss-logmanager") Verify whether the logs generated by the added library adhere to the same format as the other Quarkus logs. 1.11.2. Use MDC to add contextual log information Quarkus overrides the logging Mapped Diagnostic Context (MDC) to improve compatibility with its reactive core. 1.11.2.1. Add and read MDC data To add data to the MDC and extract it in your log output: Use the MDC class to set the data. Add import org.jboss.logmanager.MDC; Set MDC.put(... ) as shown in the example below: An example with JBoss Logging and io.quarkus.logging.Log package me.sample; import io.quarkus.logging.Log; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.logmanager.MDC; import java.util.UUID; @Path("/hello/jboss") public class GreetingResourceJbossLogging { @GET @Path("/test") public String greeting() { MDC.put("request.id", UUID.randomUUID().toString()); MDC.put("request.path", "/hello/test"); Log.info("request received"); return "hello world!"; } } Configure the log format to use %X{mdc-key} : quarkus.log.console.format=%d{HH:mm:ss} %-5p request.id=%X{request.id} request.path=%X{request.path} [%c{2.}] (%t) %s%n The resulting message contains the MDC data: 08:48:13 INFO request.id=c37a3a36-b7f6-4492-83a1-de41dbc26fe2 request.path=/hello/test [me.sa.GreetingResourceJbossLogging] (executor-thread-1) request received 1.11.2.2. MDC and supported logging APIs Based on your logging API, use one of the following MDC classes: Log4j 1 - org.apache.log4j.MDC.put(key, value) Log4j 2 - org.apache.logging.log4j.ThreadContext.put(key, value) SLF4J - org.slf4j.MDC.put(key, value) 1.11.2.3. MDC propagation In Quarkus, the MDC provider has a specific implementation for handling the reactive context, ensuring that MDC data is propagated during reactive and asynchronous processing. 
As a result, you can still access the MDC data in various scenarios: After asynchronous calls, for example, when a REST client returns a Uni. In code submitted to org.eclipse.microprofile.context.ManagedExecutor . In code executed with vertx.executeBlocking() . Note If applicable, MDC data is stored in a duplicated context , which is an isolated context for processing a single task or request. 1.12. Logging configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.log.metrics.enabled If enabled and a metrics extension is present, logging metrics are published. Environment variable: QUARKUS_LOG_METRICS_ENABLED boolean false quarkus.log.min-level The default minimum log level. Environment variable: QUARKUS_LOG_MIN_LEVEL Level DEBUG quarkus.log.decorate-stacktraces This will decorate the stacktrace in dev mode to show the line in the code that cause the exception Environment variable: QUARKUS_LOG_DECORATE_STACKTRACES boolean true quarkus.log.level The log level of the root category, which is used as the default log level for all categories. JBoss Logging supports Apache-style log levels: {@link org.jboss.logmanager.Level#FATAL} {@link org.jboss.logmanager.Level#ERROR} {@link org.jboss.logmanager.Level#WARN} {@link org.jboss.logmanager.Level#INFO} {@link org.jboss.logmanager.Level#DEBUG} {@link org.jboss.logmanager.Level#TRACE} In addition, it also supports the standard JDK log levels. Environment variable: QUARKUS_LOG_LEVEL Level INFO quarkus.log.handlers The names of additional handlers to link to the root category. These handlers are defined in consoleHandlers, fileHandlers, or syslogHandlers. Environment variable: QUARKUS_LOG_HANDLERS list of string Minimum logging categories Type Default quarkus.log.category."categories".min-level The minimum log level for this category. By default, all categories are configured with DEBUG minimum level. To get runtime logging below DEBUG , e.g., TRACE , adjust the minimum level at build time. The right log level needs to be provided at runtime. As an example, to get TRACE logging, minimum level needs to be at TRACE , and the runtime log level needs to match that. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__MIN_LEVEL InheritableLevel inherit quarkus.log.category."categories".level The log level for this category. Note that to get log levels below INFO , the minimum level build-time configuration option also needs to be adjusted. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__LEVEL InheritableLevel inherit quarkus.log.category."categories".handlers The names of the handlers to link to this category. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__HANDLERS list of string quarkus.log.category."categories".use-parent-handlers Specify whether this logger should send its output to its parent Logger Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__USE_PARENT_HANDLERS boolean true Console logging Type Default quarkus.log.console.enable If console logging should be enabled Environment variable: QUARKUS_LOG_CONSOLE_ENABLE boolean true quarkus.log.console.stderr If console logging should go to System#err instead of System#out . Environment variable: QUARKUS_LOG_CONSOLE_STDERR boolean false quarkus.log.console.format The log format. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). 
Environment variable: QUARKUS_LOG_CONSOLE_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.console.level The console log level. Environment variable: QUARKUS_LOG_CONSOLE_LEVEL Level ALL quarkus.log.console.darken Specify how much the colors should be darkened. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_CONSOLE_DARKEN int 0 quarkus.log.console.filter The name of the filter to link to the console handler. Environment variable: QUARKUS_LOG_CONSOLE_FILTER string quarkus.log.console.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_CONSOLE_ASYNC boolean false quarkus.log.console.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_CONSOLE_ASYNC_QUEUE_LENGTH int 512 quarkus.log.console.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_CONSOLE_ASYNC_OVERFLOW block , discard block File logging Type Default quarkus.log.file.enable If file logging should be enabled Environment variable: QUARKUS_LOG_FILE_ENABLE boolean false quarkus.log.file.format The log format Environment variable: QUARKUS_LOG_FILE_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{3.}] (%t) %s%e%n quarkus.log.file.level The level of logs to be written into the file. Environment variable: QUARKUS_LOG_FILE_LEVEL Level ALL quarkus.log.file.path The name of the file in which logs will be written. Environment variable: QUARKUS_LOG_FILE_PATH File quarkus.log quarkus.log.file.filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_FILE_FILTER string quarkus.log.file.encoding The character encoding used Environment variable: QUARKUS_LOG_FILE_ENCODING Charset quarkus.log.file.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_FILE_ASYNC boolean false quarkus.log.file.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_FILE_ASYNC_QUEUE_LENGTH int 512 quarkus.log.file.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_FILE_ASYNC_OVERFLOW block , discard block quarkus.log.file.rotation.max-file-size The maximum log file size, after which a rotation is executed. Environment variable: QUARKUS_LOG_FILE_ROTATION_MAX_FILE_SIZE MemorySize 10M quarkus.log.file.rotation.max-backup-index The maximum number of backups to keep. Environment variable: QUARKUS_LOG_FILE_ROTATION_MAX_BACKUP_INDEX int 5 quarkus.log.file.rotation.file-suffix The file handler rotation file suffix. When used, the file will be rotated based on its suffix. The suffix must be in a date-time format that is understood by DateTimeFormatter . Example fileSuffix: .yyyy-MM-dd Note: If the suffix ends with .zip or .gz, the rotation file will also be compressed. Environment variable: QUARKUS_LOG_FILE_ROTATION_FILE_SUFFIX string quarkus.log.file.rotation.rotate-on-boot Indicates whether to rotate log files on server initialization. You need to either set a max-file-size or configure a file-suffix for it to work. 
Environment variable: QUARKUS_LOG_FILE_ROTATION_ROTATE_ON_BOOT boolean true Syslog logging Type Default quarkus.log.syslog.enable If syslog logging should be enabled Environment variable: QUARKUS_LOG_SYSLOG_ENABLE boolean false quarkus.log.syslog.endpoint The IP address and port of the Syslog server Environment variable: QUARKUS_LOG_SYSLOG_ENDPOINT host:port localhost:514 quarkus.log.syslog.app-name The app name used when formatting the message in RFC5424 format Environment variable: QUARKUS_LOG_SYSLOG_APP_NAME string quarkus.log.syslog.hostname The name of the host the messages are being sent from Environment variable: QUARKUS_LOG_SYSLOG_HOSTNAME string quarkus.log.syslog.facility Sets the facility used when calculating the priority of the message as defined by RFC-5424 and RFC-3164 Environment variable: QUARKUS_LOG_SYSLOG_FACILITY kernel , user-level , mail-system , system-daemons , security , syslogd , line-printer , network-news , uucp , clock-daemon , security2 , ftp-daemon , ntp , log-audit , log-alert , clock-daemon2 , local-use-0 , local-use-1 , local-use-2 , local-use-3 , local-use-4 , local-use-5 , local-use-6 , local-use-7 user-level quarkus.log.syslog.syslog-type Set the SyslogType syslog type this handler should use to format the message sent Environment variable: QUARKUS_LOG_SYSLOG_SYSLOG_TYPE rfc5424 , rfc3164 rfc5424 quarkus.log.syslog.protocol Sets the protocol used to connect to the Syslog server Environment variable: QUARKUS_LOG_SYSLOG_PROTOCOL tcp , udp , ssl-tcp tcp quarkus.log.syslog.use-counting-framing If enabled, the message being sent is prefixed with the size of the message Environment variable: QUARKUS_LOG_SYSLOG_USE_COUNTING_FRAMING boolean false quarkus.log.syslog.truncate Set to true to truncate the message if it exceeds maximum length Environment variable: QUARKUS_LOG_SYSLOG_TRUNCATE boolean true quarkus.log.syslog.block-on-reconnect Enables or disables blocking when attempting to reconnect a org.jboss.logmanager.handlers.SyslogHandler.Protocol#TCP TCP or org.jboss.logmanager.handlers.SyslogHandler.Protocol#SSL_TCP SSL TCP protocol Environment variable: QUARKUS_LOG_SYSLOG_BLOCK_ON_RECONNECT boolean false quarkus.log.syslog.format The log message format Environment variable: QUARKUS_LOG_SYSLOG_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.syslog.level The log level specifying what message levels will be logged by the Syslog logger Environment variable: QUARKUS_LOG_SYSLOG_LEVEL Level ALL quarkus.log.syslog.filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_SYSLOG_FILTER string quarkus.log.syslog.max-length The maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. 
If not set, the default value is 2048 when sys-log-type is rfc5424 (which is the default) and 1024 when sys-log-type is rfc3164 Environment variable: QUARKUS_LOG_SYSLOG_MAX_LENGTH MemorySize quarkus.log.syslog.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_SYSLOG_ASYNC boolean false quarkus.log.syslog.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_SYSLOG_ASYNC_QUEUE_LENGTH int 512 quarkus.log.syslog.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_SYSLOG_ASYNC_OVERFLOW block , discard block Console handlers Type Default quarkus.log.handler.console."console-handlers".enable If console logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ENABLE boolean true quarkus.log.handler.console."console-handlers".stderr If console logging should go to System#err instead of System#out . Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__STDERR boolean false quarkus.log.handler.console."console-handlers".format The log format. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.console."console-handlers".level The console log level. Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__LEVEL Level ALL quarkus.log.handler.console."console-handlers".darken Specify how much the colors should be darkened. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__DARKEN int 0 quarkus.log.handler.console."console-handlers".filter The name of the filter to link to the console handler. Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__FILTER string quarkus.log.handler.console."console-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC boolean false quarkus.log.handler.console."console-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.console."console-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC_OVERFLOW block , discard block File handlers Type Default quarkus.log.handler.file."file-handlers".enable If file logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ENABLE boolean false quarkus.log.handler.file."file-handlers".format The log format Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.file."file-handlers".level The level of logs to be written into the file. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__LEVEL Level ALL quarkus.log.handler.file."file-handlers".path The name of the file in which logs will be written. 
Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__PATH File quarkus.log quarkus.log.handler.file."file-handlers".filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__FILTER string quarkus.log.handler.file."file-handlers".encoding The character encoding used Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ENCODING Charset quarkus.log.handler.file."file-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC boolean false quarkus.log.handler.file."file-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.file."file-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC_OVERFLOW block , discard block quarkus.log.handler.file."file-handlers".rotation.max-file-size The maximum log file size, after which a rotation is executed. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_MAX_FILE_SIZE MemorySize 10M quarkus.log.handler.file."file-handlers".rotation.max-backup-index The maximum number of backups to keep. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_MAX_BACKUP_INDEX int 5 quarkus.log.handler.file."file-handlers".rotation.file-suffix The file handler rotation file suffix. When used, the file will be rotated based on its suffix. The suffix must be in a date-time format that is understood by DateTimeFormatter . Example fileSuffix: .yyyy-MM-dd Note: If the suffix ends with .zip or .gz, the rotation file will also be compressed. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_FILE_SUFFIX string quarkus.log.handler.file."file-handlers".rotation.rotate-on-boot Indicates whether to rotate log files on server initialization. You need to either set a max-file-size or configure a file-suffix for it to work. 
Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_ROTATE_ON_BOOT boolean true Syslog handlers Type Default quarkus.log.handler.syslog."syslog-handlers".enable If syslog logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ENABLE boolean false quarkus.log.handler.syslog."syslog-handlers".endpoint The IP address and port of the Syslog server Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ENDPOINT host:port localhost:514 quarkus.log.handler.syslog."syslog-handlers".app-name The app name used when formatting the message in RFC5424 format Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__APP_NAME string quarkus.log.handler.syslog."syslog-handlers".hostname The name of the host the messages are being sent from Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__HOSTNAME string quarkus.log.handler.syslog."syslog-handlers".facility Sets the facility used when calculating the priority of the message as defined by RFC-5424 and RFC-3164 Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FACILITY kernel , user-level , mail-system , system-daemons , security , syslogd , line-printer , network-news , uucp , clock-daemon , security2 , ftp-daemon , ntp , log-audit , log-alert , clock-daemon2 , local-use-0 , local-use-1 , local-use-2 , local-use-3 , local-use-4 , local-use-5 , local-use-6 , local-use-7 user-level quarkus.log.handler.syslog."syslog-handlers".syslog-type Set the SyslogType syslog type this handler should use to format the message sent Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__SYSLOG_TYPE rfc5424 , rfc3164 rfc5424 quarkus.log.handler.syslog."syslog-handlers".protocol Sets the protocol used to connect to the Syslog server Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__PROTOCOL tcp , udp , ssl-tcp tcp quarkus.log.handler.syslog."syslog-handlers".use-counting-framing If enabled, the message being sent is prefixed with the size of the message Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__USE_COUNTING_FRAMING boolean false quarkus.log.handler.syslog."syslog-handlers".truncate Set to true to truncate the message if it exceeds maximum length Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__TRUNCATE boolean true quarkus.log.handler.syslog."syslog-handlers".block-on-reconnect Enables or disables blocking when attempting to reconnect a org.jboss.logmanager.handlers.SyslogHandler.Protocol#TCP TCP or org.jboss.logmanager.handlers.SyslogHandler.Protocol#SSL_TCP SSL TCP protocol Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__BLOCK_ON_RECONNECT boolean false quarkus.log.handler.syslog."syslog-handlers".format The log message format Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.syslog."syslog-handlers".level The log level specifying what message levels will be logged by the Syslog logger Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__LEVEL Level ALL quarkus.log.handler.syslog."syslog-handlers".filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FILTER string quarkus.log.handler.syslog."syslog-handlers".max-length The maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. 
If not set, the default value is 2048 when sys-log-type is rfc5424 (which is the default) and 1024 when sys-log-type is rfc3164 Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__MAX_LENGTH MemorySize quarkus.log.handler.syslog."syslog-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC boolean false quarkus.log.handler.syslog."syslog-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.syslog."syslog-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC_OVERFLOW block , discard block Log cleanup filters - internal use Type Default quarkus.log.filter."filters".if-starts-with The message prefix to match Environment variable: QUARKUS_LOG_FILTER__FILTERS__IF_STARTS_WITH list of string inherit quarkus.log.filter."filters".target-level The new log level for the filtered message. Defaults to DEBUG. Environment variable: QUARKUS_LOG_FILTER__FILTERS__TARGET_LEVEL Level DEBUG About the MemorySize format A size configuration option recognizes strings in this format (shown as a regular expression): [0-9]+[KkMmGgTtPpEeZzYy]? . If no suffix is given, assume bytes.
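The reference above also lists the environment variable form of each property. The following is a minimal sketch of overriding logging settings at runtime with those variables when starting a packaged application from a shell; it assumes the default Quarkus fast-jar layout for the jar path, and the values shown, including the 50M MemorySize, are placeholders rather than recommendations from this reference.
# Hypothetical runtime overrides using environment variable names from the reference above
export QUARKUS_LOG_LEVEL=DEBUG
export QUARKUS_LOG_FILE_ENABLE=true
# MemorySize value in the [0-9]+[KkMmGgTtPpEeZzYy]? format described above
export QUARKUS_LOG_FILE_ROTATION_MAX_FILE_SIZE=50M
java -jar target/quarkus-app/quarkus-run.jar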
|
[
"import org.jboss.logging.Logger; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/hello\") public class ExampleResource { private static final Logger LOG = Logger.getLogger(ExampleResource.class); @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { LOG.info(\"Hello\"); return \"hello\"; } }",
"package com.example; import org.jboss.logging.Logger; public class MyService { private static final Logger log = Logger.getLogger(MyService.class); 1 public void doSomething() { log.info(\"It works!\"); 2 } }",
"package com.example; import io.quarkus.logging.Log; 1 class MyService { 2 public void doSomething() { Log.info(\"Simple!\"); 3 } }",
"package com.example; import org.jboss.logging.Logger; @ApplicationScoped class SimpleBean { @Inject Logger log; 1 @LoggerName(\"foo\") Logger fooLog; 2 public void ping() { log.info(\"Simple!\"); fooLog.info(\"Goes to _foo_ logger!\"); } }",
"quarkus.log.level=INFO quarkus.log.category.\"org.hibernate\".level=DEBUG",
"quarkus.log.category.\"org.hibernate\".min-level=TRACE",
"-Dquarkus.log.category.\\\"org.hibernate\\\".level=TRACE",
"quarkus.log.handlers=con,mylog quarkus.log.handler.console.con.enable=true quarkus.log.handler.file.mylog.enable=true",
"quarkus.log.category.\"org.apache.kafka.clients\".level=INFO quarkus.log.category.\"org.apache.kafka.common.utils\".level=INFO",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-logging-json</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-logging-json\")",
"%dev.quarkus.log.console.json=false %test.quarkus.log.console.json=false",
"quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n",
"quarkus.log.handler.console.my-console-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-console-handler quarkus.log.category.\"com.example\".use-parent-handlers=false",
"quarkus.log.file.enable=true quarkus.log.file.path=application.log quarkus.log.file.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n",
"quarkus.log.handler.file.my-file-handler.enable=true quarkus.log.handler.file.my-file-handler.path=application.log quarkus.log.handler.file.my-file-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-file-handler quarkus.log.category.\"com.example\".use-parent-handlers=false",
"quarkus.log.syslog.enable=true quarkus.log.syslog.app-name=my-application quarkus.log.syslog.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n",
"quarkus.log.handler.syslog.my-syslog-handler.enable=true quarkus.log.handler.syslog.my-syslog-handler.app-name=my-application quarkus.log.handler.syslog.my-syslog-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-syslog-handler quarkus.log.category.\"com.example\".use-parent-handlers=false",
"package com.example; import io.quarkus.logging.LoggingFilter; import java.util.logging.Filter; import java.util.logging.LogRecord; @LoggingFilter(name = \"my-filter\") public final class TestFilter implements Filter { private final String part; public TestFilter(@ConfigProperty(name = \"my-filter.part\") String part) { this.part = part; } @Override public boolean isLoggable(LogRecord record) { return !record.getMessage().contains(part); } }",
"my-filter.part=TEST",
"quarkus.log.console.filter=my-filter",
"quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n quarkus.log.console.level=DEBUG quarkus.console.color=false quarkus.log.category.\"io.quarkus\".level=INFO",
"quarkus.log.file.enable=true Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.file.level=TRACE quarkus.log.file.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n Set 2 categories (io.quarkus.smallrye.jwt, io.undertow.request.security) to TRACE level quarkus.log.min-level=TRACE quarkus.log.category.\"io.quarkus.smallrye.jwt\".level=TRACE quarkus.log.category.\"io.undertow.request.security\".level=TRACE",
"Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n Configure a named handler that logs to console quarkus.log.handler.console.\"STRUCTURED_LOGGING\".format=%e%n Configure a named handler that logs to file quarkus.log.handler.file.\"STRUCTURED_LOGGING_FILE\".enable=true quarkus.log.handler.file.\"STRUCTURED_LOGGING_FILE\".format=%e%n Configure the category and link the two named handlers to it quarkus.log.category.\"io.quarkus.category\".level=INFO quarkus.log.category.\"io.quarkus.category\".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE",
"configure a named file handler that sends the output to 'quarkus.log' quarkus.log.handler.file.CONSOLE_MIRROR.enable=true quarkus.log.handler.file.CONSOLE_MIRROR.path=quarkus.log attach the handler to the root logger quarkus.log.handlers=CONSOLE_MIRROR",
"<build> <plugins> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> 1 <quarkus.log.level>DEBUG</quarkus.log.level> 2 <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build>",
"test { systemProperty \"java.util.logging.manager\", \"org.jboss.logmanager.LogManager\" }",
"Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.LogFactoryImpl",
"<dependency> <groupId>org.jboss.logging</groupId> <artifactId>commons-logging-jboss-logging</artifactId> </dependency>",
"implementation(\"org.jboss.logging:commons-logging-jboss-logging\")",
"<dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j-jboss-logmanager</artifactId> </dependency>",
"implementation(\"org.jboss.logmanager:log4j-jboss-logmanager\")",
"<dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j2-jboss-logmanager</artifactId> </dependency>",
"implementation(\"org.jboss.logmanager:log4j2-jboss-logmanager\")",
"<dependency> <groupId>org.jboss.slf4j</groupId> <artifactId>slf4j-jboss-logmanager</artifactId> </dependency>",
"implementation(\"org.jboss.slf4j:slf4j-jboss-logmanager\")",
"package me.sample; import io.quarkus.logging.Log; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.logmanager.MDC; import java.util.UUID; @Path(\"/hello/jboss\") public class GreetingResourceJbossLogging { @GET @Path(\"/test\") public String greeting() { MDC.put(\"request.id\", UUID.randomUUID().toString()); MDC.put(\"request.path\", \"/hello/test\"); Log.info(\"request received\"); return \"hello world!\"; } }",
"quarkus.log.console.format=%d{HH:mm:ss} %-5p request.id=%X{request.id} request.path=%X{request.path} [%c{2.}] (%t) %s%n",
"08:48:13 INFO request.id=c37a3a36-b7f6-4492-83a1-de41dbc26fe2 request.path=/hello/test [me.sa.GreetingResourceJbossLogging] (executor-thread-1) request received"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/logging_configuration/logging
|
Chapter 10. 4.7 Release Notes
|
Chapter 10. 4.7 Release Notes 10.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.7. Shared mount options for RHUI Installer have been expanded With this update, the RHUI Installer's shared storage mounting options have been expanded to allow config files, certificate files, and log files to be mounted on shared storage. For more information, see rhui-installer --help . HAProxy can be run on RHEL 9 With this update, RHUI supports HAProxy running on RHEL 9 even when RHUA is running on RHEL 8. Log file rhui-subscription-sync.log has been relocated With this update, the log file rhui-subscription-sync.log has been relocated from the /var/log directory to the /var/log/rhui directory. 10.2. Bug Fixes The following bugs have been fixed in Red Hat Update Infrastructure 4.7 that have a significant impact on users. Unrecognized rhui-manager commands are no longer ignored With this update, unrecognized rhui-manager commands are no longer ignored; instead, they are reported as unrecognized. Extraneous warnings when no CDSs are configured have been removed In prior versions of RHUI, when rhui-manager status was executed while no CDS nodes were being tracked, warnings were logged in /var/log/rhui/rhua_ansible.log. With this update, such extraneous warnings are no longer produced. Unnecessary nginx packages are omitted With this update, unnecessary nginx packages are no longer installed. Saved versions of repo metadata are now limited to five With this update, the number of saved repository metadata versions is limited to five. In the past, as new packages were added into a repository, a new version of the metadata was generated, potentially happening hundreds of times. Erroneous error messages have been removed With this update, empty repos are now exported. In the past, if RHUI was configured with --fetch-missing-symlinks False , the unexported empty repos resulted in "Errors during downloading metadata for repository" errors.
| null |
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-7-release-notes_release-notes
|
Chapter 3. Integrating Red Hat Ceph Storage
|
Chapter 3. Integrating Red Hat Ceph Storage You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the following services to a Red Hat Ceph Storage cluster: Block Storage service (cinder) Image service (glance) Object Storage service (swift) Compute service (nova) Shared File Systems service (manila) To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks: Verify that Red Hat Ceph Storage is deployed and all the required services are running. Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster. Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster. Obtain the Ceph File System Identifier. Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end. Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end. Prerequisites Access to a Red Hat Ceph Storage cluster. The RHOSO control plane is installed on an operational RHOSO cluster. 3.1. Creating Red Hat Ceph Storage pools Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster. Procedure Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images): Note When you create the pools, set the appropriate placement group (PG) number, as described in Placement Groups in the Red Hat Ceph Storage Storage Strategies Guide . Optional: Create the cephfs volume if the Shared File Systems service (manila) is enabled in the control plane. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster: Optional: Deploy an NFS service on the Red Hat Ceph Storage cluster to use CephFS with NFS: Replace <vip> with the IP address assigned to the NFS service. The NFS service should be isolated on a network that can be shared with all Red Hat OpenStack users. See NFS cluster and export management , for more information about customizing the NFS service. Important When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol . Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines. Create a cephx key for RHOSO to use to access pools: Important If the Shared File Systems service is enabled in the control plane, replace osd caps with the following: Export the cephx key: Export the configuration file: 3.2. Creating a Red Hat Ceph Storage secret Create a secret so that services can access the Red Hat Ceph Storage cluster. Procedure Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace. Base64 encode these files and store them in KEY and CONF environment variables: Create a YAML file to create the Secret resource. Using the environment variables, add the Secret configuration to the YAML file: Save the YAML file. Create the Secret resource: Replace <secret_configuration_file> with the name of the YAML file you created. 
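Optionally, before you continue, you can confirm that the secret exists in the openstack namespace and contains both files. The following is a minimal verification sketch; it assumes the secret is named ceph-conf-files, as in the preceding examples, and is not part of the original procedure.
# Hedged verification sketch; the secret name matches the example Secret resource above
oc get secret ceph-conf-files -n openstack
# The Data section should list ceph.client.openstack.keyring and ceph.conf
oc describe secret ceph-conf-files -n openstack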
Note The examples in this section use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name. For example, if the file name used for the username openstack2 is /etc/ceph/ceph.client.openstack2.keyring , then the secret data line should be ceph.client.openstack2.keyring: USDKEY . 3.3. Obtaining the Red Hat Ceph Storage File System Identifier The Red Hat Ceph Storage File System Identifier (FSID) is a unique identifier for the cluster. The FSID is used in configuration and verification of cluster interoperability with RHOSO. Procedure Extract the FSID from the Red Hat Ceph Storage secret: USD FSID=USD(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //') 3.4. Configuring the control plane to use the Red Hat Ceph Storage cluster You must configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. Configuration includes the following tasks: Confirming the Red Hat Ceph Storage cluster and the associated services have the correct network configuration. Configuring the control plane to use the Red Hat Ceph Storage secret. Configuring the Image service (glance) to use the Red Hat Ceph Storage cluster. Configuring the Block Storage service (cinder) to use the Red Hat Ceph Storage cluster. Optional: Configuring the Shared File Systems service (manila) to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. Note This example does not include configuring Block Storage backup service ( cinder-backup ) with Red Hat Ceph Storage. Procedure Check the storage interface defined in your NodeNetworkConfigurationPolicy ( nncp ) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network. The Storage network should have the same network configuration as the public_network of the Red Hat Ceph Storage cluster. It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster. Note If it does not impact workload performance, the Storage network can be different from the external Red Hat Ceph Storage cluster public_network using routed (L3) connectivity as long as the appropriate routes are added to the Storage network to reach the external Red Hat Ceph Storage cluster public_network . Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network: Confirm the Block Storage service is configured to access the Storage network through MetalLB. Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare. Confirm the Compute service (nova) is configured to access the Storage network. Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf , contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR. Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret. The following is an example of using the extraMounts parameter for this purpose. 
Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila): Replace <ceph-conf-files> with the name of your Secret CR created in Creating a Red Hat Ceph Storage secret . Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster: When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment . Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service . 1 Replace with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage FSID . Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila) . The following example exposes native CephFS: The following example exposes CephFS with NFS: Apply the updates to the OpenStackControlPlane CR: 3.5. Configuring the data plane to use the Red Hat Ceph Storage cluster Configure the data plane to use the Red Hat Ceph Storage cluster. Procedure Create a ConfigMap with additional content for the Compute service (nova) configuration file /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD. 1 This file name must follow the naming convention of ##-<name>-nova.conf . Files are evaluated by the Compute service alphabetically. A filename that starts with 01 will be evaluated by the Compute service before a filename that starts with 02 . When the same configuration option occurs in multiple files, the last one read wins. 2 The USDFSID value should contain the actual FSID as described in the Obtaining the Ceph FSID section . The FSID itself does not need to be considered secret. Create a custom version of the default nova service to use the new ConfigMap , which in this case is called ceph-nova . 1 The custom service is named nova-custom-ceph . It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name will be overwritten during reconciliation. Apply the ConfigMap and custom service changes: Update the OpenStackDataPlaneNodeSet services list to replace the nova service with the new custom service (in this case called nova-custom-ceph ), add the ceph-client service, and use the extraMounts parameter to define access to the Ceph Storage secret. Note The ceph-client service must be added before the libvirt and nova-custom-ceph services. The ceph-client service configures EDPM nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files. Save the changes to the services list. Create an OpenStackDataPlaneDeployment CR: Replace <dataplanedeployment_cr_file> with the name of your file. Result When the nova-custom-ceph service Ansible job runs, the job copies overrides from the ConfigMaps to the Compute service hosts. It will also use virsh secret-* commands so the libvirt service retrieves the cephx secret by FSID . 
Run the following command on an EDPM node after the job completes to confirm the job results: 3.6. Configuring an external Ceph Object Gateway back end You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end, by completing the following high-level tasks: Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service. Deploy and configure a RGW service to handle object storage requests. You use the openstack client tool to configure the Object Storage service. 3.6.1. Configuring RGW authentication You must configure RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service. Prerequisites You have deployed an operational OpenStack control plane. Procedure Create the Object Storage service on the control plane: Create a user called swift : Replace <swift_password> with the password to assign to the swift user. Create roles for the swift user: Add the swift user to system roles: Export the RGW endpoint IP addresses to variables and create control plane endpoints: Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services will access RGW. Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users will write objects to RGW. Note Both endpoint IP addresses are the endpoints that represent the Virtual IP addresses, owned by haproxy and keepalived , used to reach the RGW backends that will be deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and Deploying the RGW service . Add the swiftoperator role to the control plane admin group: 3.6.2. Configuring and deploying the RGW service Configure and deploy a RGW service to handle object storage requests. Procedure Log in to a Red Hat Ceph Storage Controller node. Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters: Replace <host_1> , <host_2> , ..., <host_n> with the name of the Ceph nodes where the RGW instances are deployed. Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound. Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the Object Storage service endpoint ( USDRGW_ENDPOINT ) in the Configuring RGW authentication procedure. Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network. Save the file. Enter the cephadm shell and mount the rgw_spec.yaml file. Add RGW related configuration to the cluster: Replace <keystone_endpoint> with the Identity service internal endpoint. The EDPM nodes are able to resolve the internal endpoint but not the public one. Do not omit the URIScheme from the URL, it must be either http:// or https:// . Replace <swift_password> with the password assigned to the swift user in the step. Deploy the RGW configuration using the Orchestrator: 3.7. Configuring RGW with TLS for an external Red Hat Ceph Storage cluster Configure RGW with TLS so the control plane services can resolve the external Red Hat Ceph Storage cluster host names. This procedure configures Ceph RGW to emulate the Object Storage service (swift). 
It creates a DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080 is registered as an Identity service (keystone) endpoint. This enables Red Hat OpenStack Services on OpenShift (RHOSO) clients to resolve the host and trust the certificate. Because a RHOSO pod needs to securely access an HTTPS endpoint hosted outside of Red Hat OpenShift Container Platform (RHOCP), this process is used to create a DNS domain and certificate for that endpoint. During this procedure, a DNSData domain is created, ceph.local in the examples, so that pods can map host names to IP addresses for services that are not hosted on RHOCP. DNS forwarding is then configured for the domain with the CoreDNS service. Lastly, a certificate is created using the RHOSO public root certificate authority. You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification. Procedure Create a DNSData custom resource (CR) for the external Ceph cluster. Note Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR. The following is an example of a DNSData CR: Note In this example, it is assumed that the host at the IP address 172.18.0.2 hosts the Ceph RGW endpoint for access on the private storage network. This host passes the CR so that a DNS A and PTR record is created. This enables the host to be accessed by using the host name ceph-rgw-internal-vip.ceph.local . It is also assumed that the host at the IP address 10.10.10.2 hosts the Ceph RGW endpoint for access on the external network. This host passes the CR so that a DNS A and PTR record is created. This enables the host to be accessed by using the host name ceph-rgw-external-vip.ceph.local . The list of hosts in this example is not a definitive list of required hosts. It is provided for demonstration purposes. Substitute the appropriate hosts for your environment. Apply the CR to your environment: Replace <ceph_dns_yaml> with the name of the DNSData CR file. Update the CoreDNS CR with a forwarder to the dnsmasq for requests to the ceph.local domain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide. List the openstack domain DNS cluster IP: The following is an example output for this command: Record the forwarding information from the command output. List the CoreDNS CR: Edit the CoreDNS CR and update it with the forwarding information. The following is an example of a CoreDNS CR updated with forwarding information: The following is what has been added to the CR: 1 The forwarding information recorded from the oc get svc dnsmasq-dns command. Create a Certificate CR with the host names from the DNSData CR. The following is an example of a Certificate CR: Note The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public . The RHOSO pods trust this new certificate because the root CA is used. Apply the CR to your environment: Replace <ceph_cert_yaml> with the name of the Certificate CR file. Extract the certificate and key data from the secret created when the Certificate CR was applied: Replace <ceph_cert_secret_name> with the name used in the secretName field of your Certificate CR. 
Note This command outputs YAML with a data section that looks like the following: The <b64cert> and <b64key> values are the base64-encoded certificate and key strings that you must use in the next step. Extract and base64 decode the certificate and key information obtained in the previous step and save a concatenation of them in the Ceph Object Gateway service specification. The rgw section of the specification file looks like the following: The ingress section of the specification file looks like the following: In the above examples, the rgw_frontend_ssl_certificate and ssl_cert contain the base64 decoded values from both the <b64cert> and <b64key> in the previous step, with no spaces in between. Use the procedure Deploying the Ceph Object Gateway using the service specification to deploy Ceph RGW with SSL. Connect to the openstackclient pod. Verify that the forwarding information has been successfully updated. Replace <host_name> with the name of the external host previously added to the DNSData CR. Note The following is an example output from this command where the openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered. 3.8. Enabling deferred deletion for volumes or images with dependencies When you use Ceph RBD as a back end for the Block Storage service (cinder) or the Image service (glance), you can enable deferred deletion in the Ceph RBD Clone v2 API. With deferred deletion, you can delete a volume from the Block Storage service or an image from the Image service, even if Ceph RBD volumes or snapshots depend on them, for example, COW clones created in different storage pools by the Block Storage service or the Compute service (nova). The volume is deleted from the Block Storage service or the image is deleted from the Image service, but it is still stored in a trash area in Ceph RBD for dependencies. The volume or image is only deleted from Ceph RBD when there are no dependencies. Limitations When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images. Procedure Verify which Ceph version the clients in your Ceph Storage cluster are running: Example output: To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic . Only clients in the cluster that are running Ceph version 13.2.x (Mimic) can access images with dependencies: Schedule an interval for trash purge in minutes by using the m suffix: Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service. Replace <30m> with the interval in minutes that you want to specify for trash purge . Verify that a trash purge schedule has been set for the pool: 3.9. Troubleshooting Red Hat Ceph Storage RBD integration The Compute (nova), Block Storage (cinder), and Image (glance) services can integrate with Red Hat Ceph Storage RBD to use it as a storage back end. If this integration does not work as expected, you can perform an incremental troubleshooting procedure to progressively eliminate possible causes. The following example shows how to troubleshoot an Image service integration. You can adapt the same steps to troubleshoot Compute and Block Storage service integrations. Note If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.
Procedure Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True : If you identify a service that is not properly deployed, check the status of the service. The following example checks the status of the Compute service: Note You can check the status of all deployed services with the command oc get pods -n openstack and the logs of a specific service with the command oc logs -n openstack <service_pod_name> . Replace <service_pod_name> with the name of the service pod you want to check. If you identify an operator that is not properly deployed, check the status of the operator: Note Check the operator logs with the command oc logs -n openstack-operators -lopenstack.org/operator-name=<operator_name> . Check the Status of the data plane deployment: If the Status of the data plane deployment is False , check the logs of the associated Ansible job: Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the oc get -n openstack OpenStackDataPlaneDeployment command. Check the Status of the data plane node set deployment: If the Status of the data plane node set deployment is False , check the logs of the associated Ansible job: Replace <ansible_job_name> with the name of the associated job. It is listed in the Message field of the oc get -n openstack OpenStackDataPlaneNodeSet command. If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes with the oc debug command: Replace <pod_name> with the name of the pod to duplicate. Tip You can also use the oc debug command in the following object debugging activities: To run /bin/sh on a container other than the first one, the command's default behavior, using the command form oc debug <pod_name> --container <container_name> . This is useful for pods like the API where the first container is tailing a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the container name. To route traffic to the pod during the debug process, use the command form oc debug <pod_name> --keep-labels=true . To create any resource that creates pods such as Deployments , StatefulSets , and Nodes , use the command form oc debug <resource_type>/<resource_name> . An example of creating a StatefulSet would be oc debug StatefulSet/cinder-scheduler . Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory. Note If the pod is in a CrashLoopBackOff state, use the oc debug command as described in the previous step to duplicate the pod and route traffic to it. Replace <pod_name> with the name of the applicable pod. Tip If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR. Confirm the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf . The following is an example of this process: Tip Troubleshoot the network connection between the cluster and pod if you cannot connect to a Ceph Monitor. The example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file.
There are two potential outcomes from the execution of the s.connect((ip,port)) function: If the command executes successfully and there is no error similar to the following example, the network connection between the pod and cluster is functioning correctly. If the connection is functioning correctly, the command execution will provide no return value at all. If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and cluster is not functioning correctly. It should be investigated further to troubleshoot the connection. Examine the cephx key as shown in the following example: List the contents of a pool from the caps osd parameter as shown in the following example: Replace <pool_name> with the name of the required Red Hat Ceph Storage pool. Tip If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster. If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring. Additionally, it is possible there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify that all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address> . Send test data to the images pool on the Ceph cluster. The following is an example of performing this task: Tip It is possible to be able to read data from the cluster but not have permissions to write data to it, even if write permission was granted in the cephx keyring. If write permissions have been granted but you cannot write data to the cluster, this may indicate that the cluster is overloaded and not able to write new data. In the example, the rbd command did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data. The issue was resolved on the cluster itself. There was nothing incorrect with the client configuration. 3.10. Customizing and managing Red Hat Ceph Storage Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7. For information on the customization and management of Red Hat Ceph Storage 7, refer to the Red Hat Ceph Storage documentation . The following guides contain key information and procedures for these tasks: Administration Guide Configuration Guide Operations Guide Data Security and Hardening Guide Dashboard Guide Troubleshooting Guide
|
[
"for P in vms volumes images; do cephadm shell -- ceph osd pool create USDP; cephadm shell -- ceph osd pool application enable USDP rbd; done",
"cephadm shell -- ceph fs volume create cephfs",
"cephadm shell -- ceph nfs cluster create cephfs --ingress --virtual-ip=<vip> --ingress-mode=haproxy-protocol",
"cephadm shell -- ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'",
"cephadm shell -- ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'",
"cephadm shell -- ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring",
"cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf",
"KEY=USD(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0) CONF=USD(cat /etc/ceph/ceph.conf | base64 -w 0)",
"apiVersion: v1 data: ceph.client.openstack.keyring: USDKEY ceph.conf: USDCONF kind: Secret metadata: name: ceph-conf-files namespace: openstack type: Opaque",
"oc create -f <secret_configuration_file>",
"glance: enabled: true template: databaseInstance: openstack storage: storageRequest: 10G glanceAPIs: default replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: - name: v1 region: r1 extraVol: - propagation: - CinderVolume - GlanceAPI - ManilaShare extraVolType: Ceph volumes: - name: ceph projected: sources: - secret: name: <ceph-conf-files> mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: glance: template: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:rbd [glance_store] default_backend = default_backend [default_backend] rbd_store_ceph_conf = /etc/ceph/ceph.conf store_description = \"RBD backend\" rbd_store_pool = images rbd_store_user = openstack databaseInstance: openstack databaseAccount: glance secret: osp-secret storage: storageRequest: 10G extraMounts: - name: v1 region: r1 extraVol: - propagation: - GlanceAPI extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: cinder: template: cinderVolumes: ceph: customServiceConfig: | [DEFAULT] enabled_backends=ceph [ceph] volume_backend_name=ceph volume_driver=cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf=/etc/ceph/ceph.conf rbd_user=openstack rbd_pool=volumes rbd_flatten_volume_from_snapshot=False rbd_secret_uuid=USDFSID 1",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: manila: template: manilaAPI: customServiceConfig: | [DEFAULT] enabled_share_protocols=cephfs manilaShares: share1: customServiceConfig: | [DEFAULT] enabled_share_backends=cephfs [cephfs] driver_handles_share_servers=False share_backend_name=cephfs share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_volume_mode=0755 cephfs_protocol_helper_type=CEPHFS",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: manila: template: manilaAPI: customServiceConfig: | [DEFAULT] enabled_share_protocols=nfs manilaShares: share1: customServiceConfig: | [DEFAULT] enabled_share_backends=cephfsnfs [cephfsnfs] driver_handles_share_servers=False share_backend_name=cephfsnfs share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_volume_mode=0755 cephfs_protocol_helper_type=NFS cephfs_nfs_cluster_id=cephfs",
"oc apply -f openstack_control_plane.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: ceph-nova data: 03-ceph-nova.conf: | 1 [libvirt] images_type=rbd images_rbd_pool=vms images_rbd_ceph_conf=/etc/ceph/ceph.conf images_rbd_glance_store_name=default_backend images_rbd_glance_copy_poll_interval=15 images_rbd_glance_copy_timeout=600 rbd_user=openstack rbd_secret_uuid=USDFSID 2",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-ceph 1 spec: label: dataplane-deployment-nova-custom-ceph caCerts: combined-ca-bundle edpmServiceType: nova dataSources: - configMapRef: name: ceph-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key playbook: osp.edpm.nova",
"oc create -f ceph-nova.yaml",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: roles: edpm-compute: services: - configure-network - validate-network - install-os - configure-os - run-os - ceph-client - ovn - libvirt - nova-custom-ceph - telemetry nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true",
"oc create -f <dataplanedeployment_cr_file>",
"podman exec libvirt_virtsecretd virsh secret-get-value USDFSID",
"openstack service create --name swift --description \"OpenStack Object Storage\" object-store",
"openstack user create --project service --password <swift_password> swift",
"openstack role create swiftoperator openstack role create ResellerAdmin",
"openstack role add --user swift --project service member openstack role add --user swift --project service admin",
"export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage> export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external> openstack endpoint create --region regionOne object-store public http://USDRGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\\(tenant_id\\)s; openstack endpoint create --region regionOne object-store internal http://USDRGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\\(tenant_id\\)s;",
"openstack role add --project admin --user admin swiftoperator",
"service_type: rgw service_id: rgw service_name: rgw.rgw placement: hosts: - <host_1> - <host_2> - <host_n> networks: - <storage_network> spec: rgw_frontend_port: 8082 rgw_realm: default rgw_zone: default --- service_type: ingress service_id: rgw.default service_name: ingress.rgw.default placement: count: 1 spec: backend_service: rgw.rgw frontend_port: 8080 monitor_port: 8999 virtual_ips_list: - <storage_network_vip> - <external_network_vip> virtual_interface_networks: - <storage_network>",
"cephadm shell -m /tmp/rgw_spec.yaml",
"ceph config set global rgw_keystone_url \"https://<keystone_endpoint>\" ceph config set global rgw_keystone_verify_ssl false ceph config set global rgw_keystone_api_version 3 ceph config set global rgw_keystone_accepted_roles \"member, Member, admin\" ceph config set global rgw_keystone_accepted_admin_roles \"ResellerAdmin, swiftoperator\" ceph config set global rgw_keystone_admin_domain default ceph config set global rgw_keystone_admin_project service ceph config set global rgw_keystone_admin_user swift ceph config set global rgw_keystone_admin_password \"USDSWIFT_PASSWORD\" ceph config set global rgw_keystone_implicit_tenants true ceph config set global rgw_s3_auth_use_keystone true ceph config set global rgw_swift_versioning_enabled true ceph config set global rgw_swift_enforce_content_length true ceph config set global rgw_swift_account_in_url true ceph config set global rgw_trust_forwarded_https true ceph config set global rgw_max_attr_name_len 128 ceph config set global rgw_max_attrs_num_in_req 90 ceph config set global rgw_max_attr_size 1024",
"ceph orch apply -i /mnt/rgw_spec.yaml",
"apiVersion: network.openstack.org/v1beta1 kind: DNSData metadata: labels: component: ceph-storage service: ceph name: ceph-storage namespace: openstack spec: dnsDataLabelSelectorValue: dnsdata hosts: - hostnames: - ceph-rgw-internal-vip.ceph.local ip: 172.18.0.2 - hostnames: - ceph-rgw-external-vip.ceph.local ip: 10.10.10.2",
"oc apply -f <ceph_dns_yaml>",
"oc get svc dnsmasq-dns",
"oc get svc dnsmasq-dns dnsmasq-dns LoadBalancer 10.217.5.130 192.168.122.80 53:30185/UDP 160m",
"oc -n openshift-dns describe dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2024-03-25T02:49:24Z\" finalizers: - dns.operator.openshift.io/dns-controller generation: 3 name: default resourceVersion: \"164142\" uid: 860b0e61-a48a-470e-8684-3b23118e6083 spec: cache: negativeTTL: 0s positiveTTL: 0s logLevel: Normal nodePlacement: {} operatorLogLevel: Normal servers: - forwardPlugin: policy: Random upstreams: - 10.217.5.130:53 name: ceph zones: - ceph.local upstreamResolvers: policy: Sequential upstreams: - port: 53 type: SystemResolvConf",
". servers: - forwardPlugin: policy: Random upstreams: - 10.217.5.130:53 1 name: ceph zones: - ceph.local .",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: cert-ceph-rgw namespace: openstack spec: duration: 43800h0m0s issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'} secretName: cert-ceph-rgw dnsNames: - ceph-rgw-internal-vip.ceph.local - ceph-rgw-external-vip.ceph.local",
"oc apply -f <ceph_cert_yaml>",
"oc get secret <ceph_cert_secret_name> -o yaml",
"[stack@osp-storage-04 ~]USD oc get secret cert-ceph-rgw -o yaml apiVersion: v1 data: ca.crt: <CA> tls.crt: <b64cert> tls.key: <b64key> kind: Secret",
"service_type: rgw service_id: rgw service_name: rgw.rgw placement: hosts: - host1 - host2 networks: - 172.18.0.0/24 spec: rgw_frontend_port: 8082 rgw_realm: default rgw_zone: default ssl: true rgw_frontend_ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END RSA PRIVATE KEY-----",
"service_type: ingress service_id: rgw.default service_name: ingress.rgw.default placement: count: 1 spec: backend_service: rgw.rgw frontend_port: 8080 monitor_port: 8999 virtual_interface_networks: - 172.18.0.0/24 virtual_ip: 172.18.0.2/24 ssl_cert: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END RSA PRIVATE KEY-----",
"curl --trace - <host_name>",
"sh-5.1USD curl https://rgw-external-vip.ceph.local:8080 <?xml version=\"1.0\" encoding=\"UTF-8\"?><ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult> .1USD sh-5.1USD",
"cephadm shell -- ceph osd get-require-min-compat-client",
"luminous",
"cephadm shell -- ceph osd set-require-min-compat-client mimic",
"rbd trash purge schedule add --pool <pool> <30m>",
"rbd trash purge schedule list --pool <pool>",
"oc get -n openstack OpenStackControlPlane -o jsonpath=\"{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\\n'}{end}\"",
"oc get -n openstack Nova/nova -o jsonpath=\"{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\\n'}{end}\"",
"oc get pods -n openstack-operators -lopenstack.org/operator-name",
"oc get -n openstack OpenStackDataPlaneDeployment",
"oc logs -n openstack job/<ansible_job_name>",
"oc get -n openstack OpenStackDataPlaneNodeSet",
"oc logs -n openstack job/<ansible_job_name>",
"debug <pod_name>",
"oc rsh <pod_name>",
"oc get pods | grep glance | grep external-api-0 glance-06f7a-default-external-api-0 3/3 Running 0 2d3h oc debug --container glance-api glance-06f7a-default-external-api-0 Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start Pod IP: 192.168.25.50 If you don't see a command prompt, try pressing enter. sh-5.1# cat /etc/ceph/ceph.conf Ansible managed [global] fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1 mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] [client.libvirt] admin socket = /var/run/ceph/USDcluster-USDtype.USDid.USDpid.USDcctid.asok log file = /var/log/ceph/qemu-guest-USDpid.log sh-5.1# python3 Python 3.9.19 (main, Jul 18 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import socket >>> s = socket.socket() >>> ip=\"192.168.122.100\" >>> port=3300 >>> s.connect((ip,port)) >>>",
"Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TimeoutError: [Errno 110] Connection timed out",
"bash-5.1USD cat /etc/ceph/ceph.client.openstack.keyring [client.openstack] key = \"<redacted>\" caps mgr = allow * caps mon = profile rbd caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images bash-5.1USD",
"/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls -l -p <pool_name> | wc -l",
"DATA=USD(date | md5sum | cut -c-12) POOL=images RBD=\"/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack\" USDRBD create --size 1024 USDPOOL/USDDATA"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_persistent_storage/assembly_configuring-red-hat-ceph-storage-as-the-backend-for-RHOSP-storage
|
Chapter 2. Architecture
|
Chapter 2. Architecture 2.1. OLM v1 components overview Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Lifecycle Manager (OLM) v1 comprises the following component projects: Operator Controller Operator Controller is the central component of OLM v1 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from catalogd. Catalogd Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM v1 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content. 2.2. Operator Controller Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Controller is the central component of Operator Lifecycle Manager (OLM) v1 and consumes the other OLM v1 component, catalogd. It extends Kubernetes with an API through which users can install Operators and extensions. 2.2.1. ClusterExtension API Operator Controller provides a new ClusterExtension API object that is a single resource representing an instance of an installed extension, which includes Operators via the registry+v1 bundle format. This clusterextension.olm.operatorframework.io API streamlines management of installed extensions by consolidating user-facing APIs into a single object. Important In OLM v1, ClusterExtension objects are cluster-scoped. This differs from existing OLM where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects. For more information about the earlier behavior, see Multitenancy and Operator colocation . Example ClusterExtension object apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: <operator_name> spec: packageName: <package_name> installNamespace: <namespace_name> channel: <channel_name> version: <version_number> Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation 2.2.1.1. Example custom resources (CRs) that specify a target version In Operator Lifecycle Manager (OLM) v1, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR). 
You can define a target version by specifying any of the following fields: Channel Version number Version range If you specify a channel in the CR, OLM v1 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM v1 automatically updates to the latest release that can be resolved from the channel. Example CR with a specified channel apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. If you specify the Operator or extension's target version in the CR, OLM v1 installs the specified version. When the target version is specified in the CR, OLM v1 does not change the target version when updates are published to the catalog. If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release. Example CR with the target version specified apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: "1.11.1" 1 1 Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM v1 installs the latest version of an Operator or extension that can be resolved by the Operator Controller. Example CR with a version range specified apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: ">1.11.1" 1 1 Specifies that the desired version range is greater than version 1.11.1 . For more information, see "Support for version ranges". After you create or update a CR, apply the configuration file by running the following command: Command syntax USD oc apply -f <extension_name>.yaml 2.2.2. Object ownership for cluster extensions In Operator Lifecycle Manager (OLM) v1, a Kubernetes object can only be owned by a single ClusterExtension object at a time. This ensures that objects within an OpenShift Container Platform cluster are managed consistently and prevents conflicts between multiple cluster extensions attempting to control the same object. 2.2.2.1. Single ownership The core ownership principle enforced by OLM v1 is that each object can only have one cluster extension as its owner. This prevents overlapping or conflicting management by multiple cluster extensions, ensuring that each object is uniquely associated with only one bundle. Implications of single ownership Bundles that provide a CustomResourceDefinition (CRD) object can only be installed once. Bundles provide CRDs, which are part of a ClusterExtension object. This means you can install a bundle only once in a cluster.
Attempting to install another bundle that provides the same CRD results in failure, as each custom resource can have only one cluster extension as its owner. Cluster extensions cannot share objects. The single-owner policy of OLM v1 means that cluster extensions cannot share ownership of any objects. If one cluster extension manages a specific object, such as a Deployment , CustomResourceDefinition , or Service object, another cluster extension cannot claim ownership of the same object. Any attempt to do so is blocked by OLM v1. 2.2.2.2. Error messages When a conflict occurs due to multiple cluster extensions attempting to manage the same object, Operator Controller returns an error message indicating the ownership conflict, such as the following: Example error message CustomResourceDefinition 'logfilemetricexporters.logging.kubernetes.io' already exists in namespace 'kubernetes-logging' and cannot be managed by operator-controller This error message signals that the object is already being managed by another cluster extension and cannot be reassigned or shared. 2.2.2.3. Considerations As a cluster or extension administrator, review the following considerations: Uniqueness of bundles Ensure that Operator bundles providing the same CRDs are not installed more than once. This can prevent potential installation failures due to ownership conflicts. Avoid object sharing If you need different cluster extensions to interact with similar resources, ensure they are managing separate objects. Cluster extensions cannot jointly manage the same object due to the single-owner enforcement. 2.3. Catalogd Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Lifecycle Manager (OLM) v1 uses the catalogd component and its resources to manage Operator and extension catalogs. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) 2.3.1. About catalogs in OLM v1 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Important If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If multiple catalogs are installed on a cluster, Operator Lifecycle Manager (OLM) v1 does not include a mechanism to specify a catalog when you install an Operator or extension.
OLM v1 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Additional resources File-based catalogs Adding a catalog to a cluster Red Hat-provided catalogs
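To complement the resources listed above, the following sketch shows how catalog content is typically made available to a cluster: a catalog resource points catalogd at a file-based catalog (FBC) container image. The sketch is illustrative only — the ClusterCatalog API version shown ( olm.operatorframework.io/v1alpha1 ), the catalog name, and the index image reference are assumptions that you should verify against your cluster (for example, with oc api-resources | grep -i clustercatalog ) and the "Adding a catalog to a cluster" procedure before use.
Example ClusterCatalog object (illustrative sketch)
apiVersion: olm.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: example-catalog            # hypothetical catalog name
spec:
  source:
    type: image
    image:
      ref: registry.example.com/example/example-operator-index:v1   # placeholder FBC image reference
      pollInterval: 24h            # how often catalogd re-pulls the image; optional
After catalogd unpacks the catalog, the packages it contains can be referenced by ClusterExtension objects such as the examples shown earlier in this chapter.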
|
[
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: <operator_name> spec: packageName: <package_name> installNamespace: <namespace_name> channel: <channel_name> version: <version_number>",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \"1.11.1\" 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \">1.11.1\" 1",
"oc apply -f <extension_name>.yaml",
"CustomResourceDefinition 'logfilemetricexporters.logging.kubernetes.io' already exists in namespace 'kubernetes-logging' and cannot be managed by operator-controller"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extensions/architecture
|
Chapter 1. Introduction to Satellite API
|
Chapter 1. Introduction to Satellite API Red Hat Satellite provides a Representational State Transfer (REST) API. The API provides software developers and system administrators with control over their Red Hat Satellite environment outside of the standard web interface. The REST API is useful for developers and administrators who aim to integrate the functionality of Red Hat Satellite with custom scripts or external applications that access the API over HTTP. 1.1. Overview of the Satellite API The benefits of using the REST API are: Broad client support - any programming language, framework, or system with support for the HTTP protocol can use the API. Self-descriptive - client applications require minimal knowledge of the Red Hat Satellite infrastructure because a user discovers many details at runtime. Resource-based model - the resource-based REST model provides a natural way to manage a virtualization platform. You can use the REST API to perform the following tasks: Integrate with enterprise IT systems. Integrate with third-party applications. Perform automated maintenance or error checking tasks. Automate repetitive tasks with scripts. 1.2. Satellite API compared to Hammer CLI For many tasks, you can use both Hammer and the Satellite API. You can use Hammer as a human-friendly interface to the Satellite API. For example, to test responses to API calls before applying them in a script, use the --debug option to inspect API calls that Hammer issues: hammer --debug organization list . In contrast, scripts that use API commands communicate directly with the Satellite API. For more information, see Using the Hammer CLI tool .
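As a minimal illustration of the difference, the following sketch shows the Hammer command next to an equivalent direct call to the REST API. The Satellite hostname, user name, and password are placeholders that you must replace with your own values, and depending on your certificate setup you might also need to pass --cacert or (for testing only) --insecure to curl:
# Inspect the API calls that Hammer issues
hammer --debug organization list
# Query the REST API directly for the same data
curl --request GET --user admin:<password> \
  --header "Accept: application/json" \
  https://satellite.example.com/api/organizations | python3 -m json.tool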
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_satellite_rest_api/introduction-to-satellite-api
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, open a Jira issue that describes your concerns. Provide as much detail as possible so that your request can be addressed quickly. Prerequisites You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure To provide your feedback, perform the following steps: Click the following link: Create Issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide more details about the issue. Include the URL where you found the issue. Provide information for any other required fields. Allow all fields that contain default information to remain at the defaults. Click Create to create the Jira issue for the documentation team. A documentation issue will be created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/proc-providing-feedback-on-redhat-documentation
|
Image APIs
|
Image APIs OpenShift Container Platform 4.17 Reference guide for image APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/image_apis/index
|
3.6. Using stunnel
|
3.6. Using stunnel The stunnel program is an encryption wrapper between a client and a server. It listens on the port specified in its configuration file, encrypts the communication with the client, and forwards the data to the original daemon listening on its usual port. This way, you can secure any service that itself does not support any type of encryption, or improve the security of a service that uses a type of encryption that you want to avoid for security reasons, such as SSL versions 2 and 3, affected by the POODLE SSL vulnerability (CVE-2014-3566). See Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings . OpenLDAP older than 2.4.39 (before Red Hat Enterprise Linux 6.6) and CUPS are examples of components that do not provide a way to disable SSL in their own configuration. 3.6.1. Installing stunnel Install the stunnel package by running the following command as root :
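After installation, stunnel is driven by a configuration file, conventionally /etc/stunnel/stunnel.conf . The following sketch is illustrative only — the certificate path, service name, and port numbers are placeholders rather than values mandated by this guide; see the stunnel(8) manual page for the full list of options:
; global options
cert = /etc/pki/tls/certs/stunnel.pem
; each [service] section wraps one plaintext service
[example-service]
; TLS port that stunnel listens on for clients
accept = 10443
; plaintext port of the wrapped daemon on the local host
connect = 127.0.0.1:8080
With this configuration, clients connect to port 10443 over TLS and stunnel forwards the decrypted traffic to the daemon listening on port 8080.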
|
[
"~]# yum install stunnel"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-Using_stunnel
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_containers/making-open-source-more-inclusive
|
Chapter 6. Installing a cluster on IBM Cloud into an existing VPC
|
Chapter 6. Installing a cluster on IBM Cloud into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 6.2. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 6.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The subnets for control plane machines and compute machines To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 6.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.3. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 6.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.7.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 6.7.3. 
Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-network-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 8 11 17 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 13 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 14 Specify the name of an existing VPC. 15 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 16 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. 
Specify a subnet for each availability zone in the region. 18 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.7.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.13. Next steps Customize your cluster . Optional: Opt out of remote health reporting .
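Before you continue with the customization tasks above, you can confirm that the installation converged. The following verification sketch is generic rather than specific to IBM Cloud(R) and assumes the kubeconfig exported in the previous section:
USD oc get clusterversion
USD oc get clusteroperators
USD oc get nodes
The cluster version should report the installed 4.14 release, every cluster Operator should show AVAILABLE as True with PROGRESSING and DEGRADED as False , and all control plane and compute nodes should be in the Ready state.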
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-network-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/installing-ibm-cloud-vpc
|
7.3. Moving Resources Due to Connectivity Changes
|
7.3. Moving Resources Due to Connectivity Changes Setting up the cluster to move resources when external connectivity is lost is a two-step process. Add a ping resource to the cluster. The ping resource uses the system utility of the same name to test whether a list of machines (specified by DNS host name or IPv4/IPv6 address) is reachable and uses the results to maintain a node attribute called pingd . Configure a location constraint for the resource that will move the resource to a different node when connectivity is lost. Table 7.1, "Properties of a ping resource" describes the properties you can set for a ping resource. Table 7.1. Properties of a ping resource Field Description dampen The time to wait (dampening) for further changes to occur. This prevents a resource from bouncing around the cluster when cluster nodes notice the loss of connectivity at slightly different times. multiplier The number of connected ping nodes gets multiplied by this value to get a score. Useful when there are multiple ping nodes configured. host_list The machines to contact in order to determine the current connectivity status. Allowed values include resolvable DNS host names, IPv4 and IPv6 addresses. The following example command creates a ping resource that verifies connectivity to www.example.com . In practice, you would verify connectivity to your network gateway/router. You configure the ping resource as a clone so that the resource will run on all cluster nodes. The following example configures a location constraint rule for the existing resource named Webserver . This will cause the Webserver resource to move to a host that is able to ping www.example.com if the host that it is currently running on cannot ping www.example.com .
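As a hedged verification sketch that is not part of the original procedure, you can inspect the pingd node attribute that the constraint rule evaluates once the ping clone is running, assuming the standard Pacemaker monitoring tool is available: crm_mon -A -1 The -A option displays node attributes and -1 prints the cluster status once; a node with connectivity to www.example.com shows a pingd value equal to the multiplier times the number of reachable hosts (1000 in the example above), while a node that has lost connectivity shows a value of 0 or no pingd attribute at all.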
|
[
"pcs resource create ping ocf:pacemaker:ping dampen=5s multiplier=1000 host_list=www.example.com --clone",
"pcs constraint location Webserver rule score=-INFINITY pingd lt 1 or not_defined pingd"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-moving_resources_due_to_connectivity_changes-haar
|
Chapter 7. Monitoring the Load-balancing service
|
Chapter 7. Monitoring the Load-balancing service To keep load balancing operational, you can use the load-balancer management network and create, modify, and delete load-balancing health monitors. Section 7.1, "Load-balancing management network" Section 7.2, "Load-balancing service instance monitoring" Section 7.3, "Load-balancing service pool member monitoring" Section 7.4, "Load balancer provisioning status monitoring" Section 7.5, "Load balancer functionality monitoring" Section 7.6, "About Load-balancing service health monitors" Section 7.7, "Creating Load-balancing service health monitors" Section 7.8, "Modifying Load-balancing service health monitors" Section 7.9, "Deleting Load-balancing service health monitors" Section 7.10, "Best practices for Load-balancing service HTTP health monitors" 7.1. Load-balancing management network The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) monitors load balancers through a project network referred to as the load-balancing management network . Hosts that run the Load-balancing service must have interfaces to connect to the load-balancing management network. The supported interface configuration works with the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) or the Open vSwitch mechanism driver (ML2/OVS). Use of the interfaces with other mechanism drivers has not been tested. The default interfaces created at deployment are internal Open vSwitch (OVS) ports on the default integration bridge br-int . You must associate these interfaces with actual Networking service (neutron) ports allocated on the load-balancer management network. The default interfaces are named o-hm0 . They are defined through standard interface configuration files on the Load-balancing service hosts. RHOSP director automatically configures a Networking service port and an interface for each Load-balancing service host during deployment. Port information and a template are used to create the interface configuration file, including: IP network address information including the IP and netmask MTU configuration the MAC address the Networking service port ID In the default OVS case, the Networking service port ID is used to register extra data with the OVS port. The Networking service recognizes this interface as belonging to the port and configures OVS so it can communicate on the load-balancer management network. By default, RHOSP configures security groups and firewall rules that allow the Load-balancing service controllers to communicate with its VM instances (amphorae) on TCP port 9443, and allows the heartbeat messages from the amphorae to arrive on the controllers on UDP port 5555. Different mechanism drivers might have additional or alternate requirements to allow communication between load-balancing services and the load balancers. 7.2. Load-balancing service instance monitoring The Load-balancing service (octavia) monitors the load balancing instances (amphorae) and initiates failovers and replacements if the amphorae malfunction. Any time a failover occurs, the Load-balancing service logs the failover in the corresponding health manager log on the controller in /var/log/containers/octavia . Use log analytics to monitor failover trends to address problems early. Problems such as Networking service (neutron) connectivity issues, Denial of Service attacks, and Compute service (nova) malfunctions often lead to higher failover rates for load balancers. 7.3.
Load-balancing service pool member monitoring The Load-balancing service (octavia) uses the health information from the underlying load balancing subsystems to determine the health of members of the load-balancing pool. Health information is streamed to the Load-balancing service database, and made available by the status tree or other API methods. For critical applications, you must poll for health information in regular intervals. 7.4. Load balancer provisioning status monitoring You can monitor the provisioning status of a load balancer and send alerts if the provisioning status is ERROR . Do not configure an alert to trigger when an application is making regular changes to the pool and enters several PENDING stages. The provisioning status of load balancer objects reflect the ability of the control plane to contact and successfully provision a create, update, and delete request. The operating status of a load balancer object reports on the current functionality of the load balancer. For example, a load balancer might have a provisioning status of ERROR , but an operating status of ONLINE . This might be caused by a Networking (neutron) failure that blocked that last requested update to the load balancer configuration from successfully completing. In this case, the load balancer continues to process traffic through the load balancer, but might not have applied the latest configuration updates yet. 7.5. Load balancer functionality monitoring You can monitor the operational status of your load balancer and its child objects. You can also use an external monitoring service that connects to your load balancer listeners and monitors them from outside of the cloud. An external monitoring service indicates if there is a failure outside of the Load-balancing service (octavia) that might impact the functionality of your load balancer, such as router failures, network connectivity issues, and so on. 7.6. About Load-balancing service health monitors A Load-balancing service (octavia) health monitor is a process that does periodic health checks on each back end member server to pre-emptively detect failed servers and temporarily pull them out of the pool. If the health monitor detects a failed server, it removes the server from the pool and marks the member in ERROR . After you have corrected the server and it is functional again, the health monitor automatically changes the status of the member from ERROR to ONLINE , and resumes passing traffic to it. Always use health monitors in production load balancers. If you do not have a health monitor, failed servers are not removed from the pool. This can lead to service disruption for web clients. There are several types of health monitors, as briefly described here: HTTP by default, probes the / path on the application server. HTTPS operates exactly like HTTP health monitors, but with TLS back end servers. If the servers perform client certificate validation, HAProxy does not have a valid certificate. In these cases, TLS-HELLO health monitoring is an alternative. TLS-HELLO ensures that the back end server responds to SSLv3-client hello messages. A TLS-HELLO health monitor does not check any other health metrics, like status code or body contents. PING sends periodic ICMP ping requests to the back end servers. You must configure back end servers to allow PINGs so that these health checks pass. Important A PING health monitor checks only if the member is reachable and responds to ICMP echo requests. 
PING health monitors do not detect if the application that runs on an instance is healthy. Use PING health monitors only in cases where an ICMP echo request is a valid health check. TCP opens a TCP connection to the back end server protocol port. The TCP application opens a TCP connection and, after the TCP handshake, closes the connection without sending any data. UDP-CONNECT performs a basic UDP port connect. A UDP-CONNECT health monitor might not work correctly if Destination Unreachable (ICMP type 3) is not enabled on the member server, or if it is blocked by a security rule. In these cases, a member server might be marked as having an operating status of ONLINE when it is actually down. 7.7. Creating Load-balancing service health monitors Use Load-balancing service (octavia) health monitors to avoid service disruptions for your users. The health monitors run periodic health checks on each back end server to pre-emptively detect failed servers and temporarily pull the servers out of the pool. Procedure Source your credentials file. Example Run the openstack loadbalancer healthmonitor create command, using argument values that are appropriate for your site. All health monitor types require the following configurable arguments: <pool> Name or ID of the pool of back-end member servers to be monitored. --type The type of health monitor. One of HTTP , HTTPS , PING , TCP , TLS-HELLO , or UDP-CONNECT . --delay Number of seconds to wait between health checks. --timeout Number of seconds to wait for any given health check to complete. timeout must always be smaller than delay . --max-retries Number of health checks a back-end server must fail before it is considered down. Also, the number of health checks that a failed back-end server must pass to be considered up again. In addition, HTTP health monitor types also require the following arguments, which are set by default: --url-path Path part of the URL that should be retrieved from the back-end server. By default this is / . --http-method HTTP method that is used to retrieve the url_path . By default this is GET . --expected-codes List of HTTP status codes that indicate an OK health check. By default this is 200 . Example Verification Run the openstack loadbalancer healthmonitor list command and verify that your health monitor is running. Additional resources loadbalancer healthmonitor create in the Command Line Interface Reference 7.8. Modifying Load-balancing service health monitors You can modify the configuration for Load-balancing service (octavia) health monitors when you want to change the interval for sending probes to members, the connection timeout interval, the HTTP method for requests, and so on. Procedure Source your credentials file. Example Modify your health monitor ( my-health-monitor ). In this example, a user is changing the time in seconds that the health monitor waits between sending probes to members. Example Verification Run the openstack loadbalancer healthmonitor show command to confirm your configuration changes. Additional resources loadbalancer healthmonitor set in the Command Line Interface Reference loadbalancer healthmonitor show in the Command Line Interface Reference 7.9. Deleting Load-balancing service health monitors You can remove a Load-balancing service (octavia) health monitor. Tip An alternative to deleting a health monitor is to disable it by using the openstack loadbalancer healthmonitor set --disable command. Procedure Source your credentials file. 
Example Delete the health monitor ( my-health-monitor ). Example Verification Run the openstack loadbalancer healthmonitor list command to verify that the health monitor you deleted no longer exists. Additional resources loadbalancer healthmonitor delete in the Command Line Interface Reference 7.10. Best practices for Load-balancing service HTTP health monitors When you write the code that generates the health check in your web application, use the following best practices: The health monitor url-path does not require authentication to load. By default, the health monitor url-path returns an HTTP 200 OK status code to indicate a healthy server unless you specify alternate expected-codes . The health check does enough internal checks to ensure that the application is healthy and no more. Ensure that the following conditions are met for the application: Any required database or other external storage connections are up and running. The load is acceptable for the server on which the application runs. Your site is not in maintenance mode. Tests specific to your application are operational. The page generated by the health check should be small in size: It returns in a sub-second interval. It does not induce significant load on the application server. The page generated by the health check is never cached, although the code that runs the health check might reference cached data. For example, you might find it useful to run a more extensive health check using cron and store the results to disk. The code that generates the page at the health monitor url-path incorporates the results of this cron job in the tests it performs. Because the Load-balancing service only processes the HTTP status code returned, and because health checks are run so frequently, you can use the HEAD or OPTIONS HTTP methods to skip processing the entire page.
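For comparison with the TCP example shown in the command listing for this chapter, an HTTP health monitor that probes a custom path could be created as follows; this is an illustrative sketch only, the pool name lb-pool-1 is reused from the earlier example, and /healthcheck is a placeholder for your application's health endpoint: openstack loadbalancer healthmonitor create --name my-http-health-monitor --delay 10 --max-retries 4 --timeout 5 --type HTTP --url-path /healthcheck --http-method GET --expected-codes 200 lb-pool-1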
|
[
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor create --name my-health-monitor --delay 10 --max-retries 4 --timeout 5 --type TCP lb-pool-1",
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor set my_health_monitor --delay 600",
"openstack loadbalancer healthmonitor show my_health_monitor",
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor delete my-health-monitor"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/monitor-lb-service_rhosp-lbaas
|
Chapter 8. Using Prometheus and Grafana to monitor the router network
|
Chapter 8. Using Prometheus and Grafana to monitor the router network Prometheus is container-native software built for storing historical data and for monitoring large, scalable systems such as AMQ Interconnect. It gathers data over an extended time, rather than just for the currently running session. You use Prometheus and Alertmanager to monitor and store AMQ Interconnect data so that you can use a graphical tool, such as Grafana, to visualize and run queries on the data. 8.1. Setting up Prometheus and Grafana Before you can view AMQ Interconnect dashboards, you must deploy and configure Prometheus, Alertmanager, and Grafana in the OpenShift project in which AMQ Interconnect is deployed. All of the required configuration files are provided in a GitHub repository. Procedure Clone the qdr-monitoring GitHub repository . This repository contains example configuration files needed to set up Prometheus and Grafana to monitor AMQ Interconnect. Set the NAMESPACE environment variable to the name of the project where you deployed AMQ Interconnect. For example, if you deployed AMQ Interconnect in the example project, set the NAMESPACE environment variable as follows: USD export NAMESPACE=example Run the deploy-monitoring.sh script. This script creates and configures the OpenShift resources needed to deploy Prometheus, Alertmanager, and Grafana in your OpenShift project. It also configures two dashboards that provide metrics for the router network. An alternative method of running this script is to specify the target project as a parameter. For example: Additional resources For more information about Prometheus, see the Prometheus documentation . For more information about Grafana, see the Grafana documentation . 8.2. Viewing AMQ Interconnect dashboards in Grafana After setting up Prometheus and Grafana, you can visualize the AMQ Interconnect data on the following Grafana dashboards: Qpid Dispatch Router Shows metrics for: Deliveries ingress Deliveries egress Deliveries ingress route container Deliveries egress route container Deliveries redirected to fallback destination Dropped presettled deliveries Presettled deliveries Auto links Link routes Address count Connection count Link count Qpid Dispatch Router - Delayed Deliveries Shows metrics for: Cumulative delayed 10 seconds Cumulative delayed 1 second Rate of new delayed deliveries For more information about these metrics, see Section 8.3, "Router metrics" . Procedure In the OpenShift web console, switch to Networking → Routes , and click the URL for the grafana Route. The Grafana Log In page appears. Enter your user name and password, and then click Log In . The default Grafana user name and password are both admin . After logging in for the first time, you can change the password. On the top header, click the dashboard drop-down menu, and then select the Qpid Dispatch Router or Qpid Dispatch Router - Delayed Deliveries dashboard. Figure 8.1. Delayed Deliveries dashboard 8.3. Router metrics The following metrics are available in Prometheus: qdr_connections_total The total number of network connections to the router. This includes connections from and to any AMQP route container. qdr_links_total The total number of incoming and outgoing links attached to the router. qdr_addresses_total The total number of addresses known to the router. qdr_routers_total The total number of routers known to the router. qdr_link_routes_total The total number of active and inactive link routes configured for the router.
See Understanding link routing for more details. qdr_auto_links_total The total number of incoming and outgoing auto links configured for the router. See Configuring brokered messaging for more details about autolinks. qdr_presettled_deliveries_total The total number of presettled deliveries arriving at the router. The router settles the incoming deliveries and propagates the settlement to the message destination, also known as fire and forget . qdr_dropped_presettled_deliveries_total The total number of presettled deliveries that the router dropped due to congestion. The router settles the incoming deliveries and propagates the settlement to the message destination, also known as fire and forget. qdr_accepted_deliveries_total The total number of deliveries accepted at the router. See Understanding message routing for more information on accepted deliveries. qdr_released_deliveries_total The total number of deliveries released at the router. See Understanding message routing for more information on released deliveries. qdr_rejected_deliveries_total The total number of deliveries rejected at the router. See Understanding message routing for more information on rejected deliveries. qdr_modified_deliveries_total The total number of deliveries modified at the router. See Understanding message routing for more information on modified deliveries. qdr_deliveries_ingress_total The total number of messages delivered to the router from clients. This includes management messages, but not route control messages. qdr_deliveries_egress_total The total number of messages sent from the router to clients. This includes management messages, but not route control messages. qdr_deliveries_transit_total , qdr_deliveries_ingress_route_container_total The total number of messages passing through the router for delivery to a different router. qdr_deliveries_egress_route_container_total The total number of deliveries sent to AMQP route containers from the router This includes messages to an AMQ Broker instance and management messages, but not route control messages. qdr_deliveries_delayed_1sec_total The total number of deliveries forwarded by the router that were unsettled for more than one second. qdr_deliveries_delayed_10sec_total The total number of deliveries forwarded by the router that were unsettled for more than ten seconds. qdr_deliveries_stuck_total The total number of deliveries that cannot be delivered. Typically, deliveries cannot be delivered due to lack of credit as described in Message routing flow control qdr_links_blocked_total The total number of links that are blocked. qdr_deliveries_redirected_to_fallback_total The total number of deliveries that were forwarded to a fallback destination. See Handling undeliverable messages for more information. Additional information See Section 8.2, "Viewing AMQ Interconnect dashboards in Grafana" .
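As an illustration that is not part of the original guide, the counters listed above can also be queried directly in Prometheus with standard PromQL functions; for example, rate(qdr_deliveries_ingress_total[5m]) charts the per-second rate of messages delivered to the router from clients, and increase(qdr_deliveries_delayed_10sec_total[1h]) shows how many deliveries were delayed by more than ten seconds during the last hour, assuming the default metric names listed in this section.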
|
[
"git clone https://github.com/interconnectedcloud/qdr-monitoring",
"export NAMESPACE=example",
"./deploy-monitoring.sh",
"./deploy-monitoring.sh example"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_interconnect_on_openshift/using-prometheus-grafana-monitor-router-network-router-ocp
|
Chapter 290. SAP Component
|
Chapter 290. SAP Component The SAP component is a package consisting of a suite of ten different SAP components. There are remote function call (RFC) components that support the sRFC, tRFC, and qRFC protocols; and there are IDoc components that facilitate communication using messages in IDoc format. The component uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and the SAP IDoc library to facilitate the transmission of documents in the Intermediate Document (IDoc) format. 290.1. Overview Dependencies Maven users need to add the following dependency to their pom.xml file to use this component: <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap</artifactId> <version>x.x.x</version> </dependency> Additional platform restrictions for the SAP component Because the SAP component depends on the third-party JCo 3 and IDoc 3 libraries, it can only be installed on the platforms that these libraries support. For more details about the supported library versions and platform restrictions, see Red Hat JBoss Fuse Supported Configurations . SAP JCo and SAP IDoc libraries A prerequisite for using the SAP component is that the SAP Java Connector (SAP JCo) libraries and the SAP IDoc library are installed into the lib/ directory of the Java runtime. You can download the appropriate set of SAP libraries for your target operating system from the SAP Service Marketplace. Note You must have an SAP Service Marketplace Account to download and use these libraries. The names of the library files vary depending on the target operating system, as shown in Table 290.1, "Required SAP Libraries" . Table 290.1. Required SAP Libraries SAP Component Linux and UNIX Windows SAP JCo 3 sapjco3.jar libsapjco3.so sapjco3.jar sapjco3.dll SAP IDoc sapidoc3.jar sapidoc3.jar For more information, see the SAP Java Connector documentation. 290.2. Installing required SAP Libraries 290.2.1. Deploying in a Fuse OSGi Container You can install the SAP JCo libraries and the SAP IDoc library into the JBoss Fuse OSGi container as follows: Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. Copy the sapjco3.jar , libsapjco3.so (or sapjco3.dll on Windows), and sapidoc3.jar library files into the lib/ directory of your Fuse installation. Open both the configuration properties file, etc/config.properties , and the custom properties file, etc/custom.properties , in a text editor. In the etc/config.properties file, look for the org.osgi.framework.system.packages.extra property and copy the complete property setting (this setting extends over multiple lines, with a backslash character, \ , used to indicate line continuation). Now paste this setting into the etc/custom.properties file. You can now add the extra packages required to support the SAP libraries. In the etc/custom.properties file, add the required packages to the org.osgi.framework.system.packages.extra setting as shown: Don't forget to include a comma and a backslash, , \ , at the end of each line preceding the new entries, so that the list is properly continued. Restart the container for these changes to take effect. Install the camel-sap feature in the container. In the Karaf console, enter the following command: 290.2.2.
Deploying in a JBoss EAP container To deploy the SAP component in a JBoss EAP container, perform the following steps: Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. Copy the JCo library files and the IDoc library file into a subdirectory for your JBoss EAP installation. Important Follow the naming convention The native libraries must be installed in a subdirectory that follows the naming standard, in the form of <osname>-<cpuname> . Information and a complete list of allowed names is available in the JBoss Modules manual . For example, if your host platform is 64-bit Linux ( linux-x86_64 ), install the library files as follows: Example Create a new file called USDJBOSS_HOME/modules/system/layers/fuse/org/wildfly/camel/extras/main/module.xml and add the following content: 290.2.3. Deploying in Spring Boot and OpenShift Container Platform To deploy SAP in your project with Maven using maven-resources and maven-jar plugins, follow these steps: Download the libraries Add dependencies Place libraries in the project Add configuration for the libraries Deploy to OpenShift 290.2.3.1. Downloading libraries You need three libraries: Common library for all environments: sapidoc3.jar Libraries for your architecture: sapjco3.jar sapjco3.so For more information, see the SAP Java Connector documentation. Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. 290.2.3.2. Adding dependencies Maven users need to add the following dependency to their pom.xml file to use this component: <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency> 290.2.3.3. Placing libraries Copy the SAP library files to the lib directory relative to the pom.xml When you run Maven, it follows the instructions in the pom.xml and copy the files to the specified locations. Example: AMD64 Warning Do not add the SAP library files to a custom Maven repository The SAP Java Connector performs validation on the names of the JAR files sapjco3.jar and sapidoc3.jar . If you copy a JAR file to your Maven repository, spring-boot-maven-plugin renames them by appending the version number. This causes validation to fail, preventing the application from deploying properly. 290.2.3.4. Configuring plugins Add the maven configuration to the pom.xml , below the spring-boot-maven-plugin : Add the maven-jar-plugin and set the Class-Path entry to the lib folder location: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestEntries> <Class-Path>lib/USD{os.arch}/sapjco3.jar lib/USD{os.arch}/sapidoc3.jar</Class-Path> </manifestEntries> </archive> </configuration> </plugin> This creates the correct structure for the necessary artifacts, and make the SAP libraries deploy to the required target directories. 
Use the maven-resources-plugin in the pom.xml to copy the library files: <plugin> <artifactId>maven-resources-plugin</artifactId> <executions> <execution> <id>copy-resources01</id> <phase>process-classes</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <outputDirectory>USD{basedir}/target/lib</outputDirectory> <encoding>UTF-8</encoding> <resources> <resource> <directory>USD{basedir}/lib</directory> <includes> <include>**/**</include> </includes> </resource> </resources> </configuration> </execution> </executions> </plugin> This copies the libraries from the relative lib directory to target/lib when you run the oc command. 290.2.3.5. Deploying to OpenShift Complete the following steps to deploy to OpenShift, and trigger a Maven build. Run oc to create and configure the build: oc new-build --binary=true --image-stream="<current_Fuse_Java_OpenShift_Imagestream_version>" --name=<application_name> -e "ARTIFACT_COPY_ARGS=-a ." -e "MAVEN_ARGS_APPEND=<additional_args>" -e "ARTIFACT_DIR=<relative_path_of_target_directory>" Replace values as needed: <current_Fuse_Java_OpenShift_Imagestream_version> : The current image stream. <application_name> : Application name of your choice. <additional_args> : Arguments to append to the Maven build. <relative_path_of_target_directory> : Relative path of the application's target directory. Example In this example, MAVEN_ARGS_APPEND is used to build only a specific project in the spring-boot directory: Start the build (from the multimodule parent directory). This sends sources from the local host to OpenShift, where the Maven build runs. Start the app Example 290.2.4. Deploying in Spring Boot and OpenShift Container Platform using JKube To deploy SAP in your project with JKube using the openshift-maven-plugin , follow these steps: Place the connectors in the lib directory of your project: Example: AMD64 Warning Do not add the SAP library files to a custom Maven repository The SAP Java Connector performs validation on the names of the JAR files sapjco3.jar and sapidoc3.jar . If you copy a JAR file to your Maven repository, spring-boot-maven-plugin renames them by appending the version number. This causes validation to fail, preventing the application from deploying properly.
Exclude the embedded connectors from the starter in pom.xml : <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency> Define local connectors as static resources in pom.xml : <resources> <resource> <directory>src/lib/USD{os.arch}/com/sap/conn/idoc</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> <resource> <directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> </resources> Configure resources and deployment configuration, specifying the connector path in pom.xml : <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <images> <image> <name>USD{project.artifactId}:USD{project.version}</name> <build> <from>USD{java.docker.image}</from> <assembly> <targetDir>/deployments</targetDir> <layers> <layer> <id>static-files</id> <fileSets> <fileSet> <directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <outputDirectory>static</outputDirectory> <includes> <include>*.so</include> </includes> </fileSet> </fileSets> </layer> </layers> </assembly> </build> </image> </images> </configuration> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin> 290.2.4.1. Deploying on Openshift Once the openshift-maven-plugin is configured in pom.xml , you can import the fuse spring-boot image into a specific namespace as a builder image for our application. Start in your application path: Create the project streams: Import the image streams: Create your project Deploy the application with maven: 290.3. URI format There are two different kinds of endpoint provided by the SAP component: the Remote Function Call (RFC) endpoints, and the Intermediate Document (IDoc) endpoints. The URI formats for the RFC endpoints are as follows: The URI formats for the IDoc endpoints are as follows: The URI formats prefixed by sap- endpointKind -destination define destination endpoints (in other words, Camel producer endpoints), and destinationName is the name of a specific outbound connection to a SAP instance. Outbound connections are named and configured at the component level, as described in Section 290.6.2, "Destination Configuration" . The URI formats prefixed by sap- endpointKind -server define server endpoints (in other words, Camel consumer endpoints) and serverName is the name of a specific inbound connection from a SAP instance. Inbound connections are named and configured at the component level, as described in Section 290.6.3, "Server Configuration" . The other components of an RFC endpoint URI are as follows: rfcName (Required) In a destination endpoint URI, the name of the RFC invoked by the endpoint in the connected SAP instance. In a server endpoint URI, the name of the RFC handled by the endpoint when invoked from the connected SAP instance. queueName Specifies the queue this endpoint sends a SAP request to. The other components of an IDoc endpoint URI are as follows: idocType (Required) Specifies the Basic IDoc Type of an IDoc produced by this endpoint. 
idocTypeExtension Specifies the IDoc Type Extension, if any, of an IDoc produced by this endpoint. systemRelease Specifies the associated SAP Basis Release, if any, of an IDoc produced by this endpoint. applicationRelease Specifes the associated Application Release, if any, of an IDoc produced by this endpoint. queueName Specifies the queue this endpoint sends a SAP request to. 290.4. Options 290.4.1. Options for RFC destination endpoints The RFC destination endpoints ( sap-srfc-destination , sap-trfc-destination , and sap-qrfc-destination ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session transacted false If true , specifies that this endpoint initiates a SAP transaction Options for RFC server endpoints The SAP RFC server endpoints ( sap-srfc-server and sap-trfc-server ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session. propagateExceptions false (sap-trfc-server endpoint only) If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of the exchange's exception handler Options for the IDoc List Server endpoint The SAP IDoc List Server endpoint ( sap-idoclist-server ) supports the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session. propagateExceptions false If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of the exchange's exception handler 290.5. Summary of the RFC and IDoc endpoints The SAP component package provides the following RFC and IDoc endpoints: sap-srfc-destination JBoss Fuse SAP Synchronous Remote Function Call Destination Camel component. Use this endpoint in cases where Camel routes require synchronous delivery of requests to and responses from a SAP system. Note The sRFC protocol used by this component delivers requests and responses to and from a SAP system with best effort . In case of a communication error while sending a request, the completion status of a remote function call in the receiving SAP system remains in doubt . sap-trfc-destination JBoss Fuse SAP Transactional Remote Function Call Destination Camel component. Use this endpoint in cases where requests must be delivered to the receiving SAP system at most once . To accomplish this, the component generates a transaction ID, tid , which accompanies every request sent through the component in a route's exchange. The receiving SAP system records the tid accompanying a request before delivering the request; if the SAP system receives the request again with the same tid it does not deliver the request. Thus, if a route encounters a communication error when sending a request through an endpoint of this component, it can retry sending the request within the same exchange knowing it is delivered and executed only once. Note The tRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. Note This component does not guarantee the order of a series of requests through its endpoints, and the delivery and execution order of these requests may differ on the receiving SAP system due to communication errors and resends of a request. For guaranteed delivery order, please see the JBoss Fuse SAP Queued Remote Function Call Destination Camel component. 
sap-qrfc-destination JBoss Fuse SAP Queued Remote Function Call Destination Camel component. This component extends the capabilities of the JBoss Fuse Transactional Remote Function Call Destination camel component by adding in order delivery guarantees to the delivery of requests through its endpoints. Use this endpoint in cases where a series of requests depend on each other and must be delivered to the receiving SAP system at most once and in order . The component accomplishes the at most once delivery guarantees using the same mechanisms as the JBoss Fuse SAP Transactional Remote Function Call Destination Camel component. The ordering guarantee is accomplished by serializing the requests in the order they are received by the SAP system to an inbound queue . Inbound queues are processed by the QIN scheduler within SAP. When the inbound queue is activated , the QIN Scheduler executes the queue requests in order. Note The qRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. sap-srfc-server JBoss Fuse SAP Synchronous Remote Function Call Server Camel component. Use this component and its endpoints in cases where a Camel route is required to synchronously handle requests from and responses to a SAP system. sap-trfc-server JBoss Fuse SAP Transactional Remote Function Call Server Camel component. Use this endpoint in cases where the sending SAP system requires at most once delivery of its requests to a Camel route. To accomplish this, the sending SAP system generates a transaction ID, tid , which accompanies every request it sends to the component's endpoints. The sending SAP system first checks with the component whether a given tid has been received by it before sending a series of requests associated with the tid . The component checks the list of received tid s it maintains, record the sent tid if it is not in that list, and then respond to the sending SAP system, indicating whether the tid had already been recorded. If the tid has not been previously recorded, the sending SAP system transmits the series of requests. This enables a sending SAP system to reliably send a series of requests once to a camel route. sap-idoc-destination JBoss Fuse SAP IDoc Destination Camel component. Use this endpoint in cases where a Camel route is required to send a list of Intermediate Documents (IDocs) to a SAP system. sap-idoclist-destination JBoss Fuse SAP IDoc List Destination Camel component. Use this endpoint in cases where a Camel route is required to send a list of Intermediate documents (IDocs) to a SAP system. sap-qidoc-destination JBoss Fuse SAP Queued IDoc Destination Camel component. Use this component and its endpoints in cases where a Camel route is required to send a list of Intermediate documents (IDocs) to a SAP system in order . sap-qidoclist-destination JBoss Fuse SAP Queued IDoc List Destination Camel component. Use this component and its endpoints in cases where a camel route is required to send a list of Intermediate documents (IDocs) to a SAP system in order . sap-idoclist-server JBoss Fuse SAP IDoc List Server Camel component. Use this endpoint in cases where a sending SAP system requires delivery of Intermediate Document lists to a Camel route. This component uses the tRFC protocol to communicate with SAP as described in the sap-trfc-server-standalone quick start. 
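To illustrate how these endpoints are referenced from a route, the following minimal Blueprint XML sketch is not part of the original text and reuses the quickstartDest destination and the BAPI_FLCUST_GETLIST function module that appear in the configuration examples later in this chapter: <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:getFlightCustomerList"/> <to uri="sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST"/> </route> </camelContext> The input message body of the exchange must carry the request structure expected by the function module, and the response is returned in the output message, as described for destination endpoints in the sections that follow.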
SAP RFC destination endpoint An RFC destination endpoint supports outbound communication to SAP, which enable these endpoints to make RFC calls out to ABAP function modules in SAP. An RFC destination endpoint is configured to make an RFC call to a specific ABAP function over a specific connection to a SAP instance. An RFC destination is a logical designation for an outbound connection and has a unique name. An RFC destination is specified by a set of connection parameters called destination data . An RFC destination endpoint extracts an RFC request from the input message of the IN-OUT exchanges it receives and dispatch that request in a function call to SAP. The output message of the exchange contains the response from the function call. Since SAP RFC destination endpoints only support outbound communication, an RFC destination endpoint only supports the creation of producers. SAP RFC server endpoint An RFC server endpoint supports inbound communication from SAP, which enables ABAP applications in SAP to make RFC calls into server endpoints. An ABAP application interacts with an RFC server endpoint as if it were a remote function module. An RFC server endpoint is configured to receive an RFC call to a specific RFC function over a specific connection from a SAP instance. An RFC server is a logical designation for an inbound connection and has a unique name. An RFC server is specified by a set of connection parameters called server data . An RFC server endpoint handles an incoming RFC request and dispatch it as the input message of an IN-OUT exchange. The output message of the exchange is returned as the response of the RFC call. Since SAP RFC server endpoints only support inbound communication, an RFC server endpoint only supports the creation of consumers. SAP IDoc and IDoc list destination endpoints An IDoc destination endpoint supports outbound communication to SAP, which can then perform further processing on the IDoc message. An IDoc document represents a business transaction, which can easily be exchanged with non-SAP systems. An IDoc destination is specified by a set of connection parameters called destination data . An IDoc list destination endpoint is similar to an IDoc destination endpoint, except that the messages it handles consist of a list of IDoc documents. SAP IDoc list server endpoint An IDoc list server endpoint supports inbound communication from SAP, enabling a Camel route to receive a list of IDoc documents from a SAP system. An IDoc list server is specified by a set of connection parameters called server data . metadata repositories A metadata repository is used to store the following kinds of metadata: Interface descriptions of function modules This metadata is used by the JCo and ABAP runtimes to check RFC calls to ensure the type-safe transfer of data between communication partners before dispatching those calls. A repository is populated with repository data. Repository data is a map of named function templates. A function template contains the metadata describing all the parameters and their typing information passed to and from a function module and has the unique name of the function module it describes. IDoc type descriptions This metadata is used by the IDoc runtime to ensure that the IDoc documents are correctly formatted before being sent to a communication partner. A basic IDoc type consists of a name, a list of permitted segments, and a description of the hierarchical relationship between the segments. 
Some additional constraints can be imposed on the segments: a segment can be mandatory or optional; and it is possible to specify a minimum/maximum range for each segment (defining the number of allowed repetitions of that segment). SAP destination and server endpoints thus require access to a repository to send and receive RFC calls and to send and receive IDoc documents. For RFC calls, the metadata for all function modules invoked and handled by the endpoints must reside within the repository; and for IDoc endpoints, the metadata for all IDoc types and IDoc type extensions handled by the endpoints must reside within the repository. The location of the repository used by a destination and server endpoint is specified in the destination data and the server data of their respective connections. In the case of a SAP destination endpoint, the repository it uses typically resides in a SAP system and it defaults to the SAP system it is connected to. This default requires no explicit configuration in the destination data. Furthermore, the metadata for the remote function call that a destination endpoint makes already exist in a repository for any existing function module that it calls. The metadata for calls made by destination endpoints thus require no configuration in the SAP component. On the other hand, the metadata for function calls handled by server endpoints do not typically reside in the repository of a SAP system and must instead be provided by a repository residing in the SAP component. The SAP component maintains a map of named metadata repositories. The name of a repository corresponds to the name of the server to which it provides metadata. 290.6. Configuration The SAP component maintains three maps to store destination data, server data , and repository data. The destination data store and the server data store use a special configuration object, SapConnectionConfiguration , which automatically gets injected into the SAP component (in the context of Blueprint XML configuration or Spring XML configuration files). The repository data store must be configured directly on the relevant SAP component. 290.6.1. Configuration Overview Overview The SAP component maintains three maps to store destination data, server data , and repository data. The component's property, destinationDataStore , stores destination data keyed by destination name. The property, serverDataStore , stores server data keyed by server name. The property, repositoryDataStore , stores repository data keyed by repository name. You must pass these configurations to the component during its initialization. Example The following example shows how to configure a sample destination data store and a sample server data store in a Blueprint XML file. The sap-configuration bean (of type SapConnectionConfiguration ) is automatically injected into any SAP component used in this XML file. <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ... 
<!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> <property name="serverDataStore"> <map> <entry key="quickstartServer" value-ref="quickstartServerData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="password" /> <property name="lang" value="en" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartServerData" class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl"> <property name="gwhost" value="example.com" /> <property name="gwserv" value="3300" /> <!-- Do not change the following property values --> <property name="progid" value="QUICKSTART" /> <property name="repositoryDestination" value="quickstartDest" /> <property name="connectionCount" value="2" /> </bean> </blueprint> 290.6.2. Destination Configuration Overview The configurations for destinations are maintained in the destinationDataStore property of the SAP component. Each entry in this map configures a distinct outbound connection to an SAP instance. The key for each entry is the name of the outbound connection and is used in the destinationName component of a destination endpoint URI as described in the URI format section. The value for each entry is a destination data configuration object - org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl - that specifies the configuration of an outbound SAP connection. Sample destination configuration The following Blueprint XML code shows how to configure a sample destination with the name quickstartDest . <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ...
<!-- Create interceptor to support tRFC processing --> <bean id="currentProcessorDefinitionInterceptor" class="org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy" /> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="password" /> <property name="lang" value="en" /> </bean> </blueprint> For example, after configuring the destination as shown in the preceding Blueprint XML file, you could invoke the BAPI_FLCUST_GETLIST remote function call on the quickstartDest destination using the following URI: sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST Interceptor for tRFC and qRFC destinations The preceding sample destination configuration shows the instantiation of a CurrentProcessorDefinitionInterceptStrategy object. This object installs an interceptor in the Camel runtime, enabling the Camel SAP component to keep track of its position within a Camel route while handling RFC transactions. For more details, see the section called "Transactional RFC destination endpoints" . Important This interceptor must be installed in the Camel runtime to properly manage outbound transactional RFC communication. It is critically important for transactional RFC destination endpoints (such as sap-trfc-destination and sap-qrfc-destination ). The Destination RFC Transaction Handlers issue warnings into the Camel log if the strategy is not found at runtime, and in this situation the Camel runtime must to be re-provisioned and restarted to properly manage outbound transactional RFC communication. Log-in and authentication options The following table lists the log-in and authentication options for configuring a destination in the SAP destination data store: Name Default Value Description client SAP client, mandatory log-in parameter user Log-in user, log-in parameter for password based authentication aliasUser Log-in user alias, can be used instead of log-in user userId User identity to use for log-in to the ABAP AS. Used by the JCo runtime, if the destination configuration uses SSO/assertion ticket, certificate, current user, or SNC environment for authentication. If there is no user or user alias, the user ID is mandatory. This ID is not used by the SAP backend, the JCo runtime uses it locally. passwd Log-in password, log-in parameter for password-based authentication lang Log-in language to use instead of the user language mysapsso2 Use the specified SAP Cookie Version 2 as a log-in ticket for SSO based authentication x509cert Use the specified X509 certificate for certificate based authentication lcheck Postpone the authentication until the first call - 1 (enable). Use lcheck in special cases only . useSapGui Use a visible, hidden, or do not use SAP GUI codePage Additional log-in parameter to define the codepage that is used to convert the log-in parameters. Use codePage in special cases only. 
getsso2 Order a SSO ticket after log-in, the obtained ticket is available in the destination attributes denyInitialPassword If set to 1 , using initial passwords leads to an exception (default is 0 ). Connection options The following table lists the connection options for configuring a destination in the SAP destination data store: Name Default Value Description saprouter SAP Router string for connection to systems behind a SAP Router. SAP Router string contains the chain of SAP Routers and its port numbers and has the form: (/H/<host>[/S/<port>])+ sysnr System number of the SAP ABAP application server, mandatory for a direct connection ashost SAP ABAP application server, mandatory for a direct connection mshost SAP message server, mandatory property for a load balancing connection msserv SAP message server port, optional property for a load balancing connection. To resolve the service names sapmsXXX, the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. gwhost Allows specifying a concrete gateway. Use this for establishing the connection to an application server. If not specified the gateway on the application server is used. gwserv Set this when using gwhost . Allows specifying the port used on that gateway. If not specified the port of the gateway on the application server is used. To resolve the service names sapgwXXX, the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. r3name System ID of the SAP system, mandatory property for a load balancing connection. group Group of SAP application servers, mandatory property for a load balancing connection network LAN Set this value depending on the network quality between JCo and your target system to optimize performance. The valid values are LAN or WAN (which is relevant for fast serialization only). WAN uses a slower but more efficient compression algorithm, with data analysis for further compression options. LAN uses a very fast compression algorithm, with only basic data analysis. With the LAN option, the compression ratio is not as efficient but the network transfer time is considered to be less significant. The default setting is LAN . serializationFormat rowBased Format for serialization. Can be rowBased (default) or columnBased (fast serialization). Connection pool options The following table lists the connection pool options for configuring a destination in the SAP destination data store: Name Default Value Description peakLimit 0 The maximum number of simultaneously active outbound connections for a destination. A value of 0 allows an unlimited number of active connections. Otherwise, it is automatically increased to poolCapacity . Default setting is the value of poolCapacity if configured. If poolCapacity is not specified, the default is 0 (unlimited). poolCapacity 1 The maximum number of idle outbound connections kept open by the destination. A value of 0 means no connection pooling (default is 1 ). expirationTime The minimum time in milliseconds a free connection held internally by the destination must be kept open. expirationPeriod The period in milliseconds after which the destination checks expiration for the released connections. 
maxGetTime The maximum time in milliseconds to wait for a connection, if the maximum allowed number of connections has already been allocated by the application. Secure network connection options The following table lists the secure network options for configuring a destination in the SAP destination data store: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on) sncPartnername SNC partner, for example: p:CN=R3, O=XYZ-INC, C=EN sncQop SNC level of security: 1 to 9 sncMyname Own SNC name. Overrides environment settings sncLibrary Path to library that provides SNC service Repository options The following table lists the repository options for configuring a destination in the SAP destination data store: Name Default Value Description repositoryDest Specifies which destination to use as repository. repositoryUser This defines the user for repository calls, if a repository destination has not been defined. This enables you to use a different user for repository look-ups. repositoryPasswd The password for a repository user. Mandatory when using a repository user. repositorySnc (Optional) If SNC is used for this destination, it is possible to turn it off for repository connections, if this property is set to 0 . Default setting is the value of jco.client.snc_mode . For special cases only. repositoryRoundtripOptimization Enable the RFC_METADATA_GET API, which provides repository data in one round trip. 1 Activates use of RFC_METADATA_GET in ABAP System, 0 Deactivates RFC_METADATA_GET in ABAP System. If the property is not set, the destination initially does a remote call to check whether RFC_METADATA_GET is available. If it is available, the destination uses it. Note If the repository is already initialized (for example because it is used by some other destination) this property does not have any effect. Generally, this property is related to the ABAP System, and should have the same value on all destinations pointing to the same ABAP System. See note 1456826 for backend prerequisites. Trace configuration options The following table lists the trace configuration options for configuring a destination in the SAP destination data store: Name Default Value Description trace Enable/disable RFC trace ( 0 or 1 ) cpicTrace Enable/disable CPIC trace [0..3] 290.6.3. Server Configuration Overview The configurations for servers are maintained in the serverDataStore property of the SAP component. Each entry in this map configures a distinct inbound connection from an SAP instance. The key for each entry is the name of the outbound connection and is used in the serverName component of a server endpoint URI as described in the URI format section. The value for each entry is a server data configuration object , org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl , that defines the configuration of an inbound SAP connection. Sample server configuration The following Blueprint XML code shows how to create a sample server configuration with the name, quickstartServer . <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ... 
<!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> <property name="serverDataStore"> <map> <entry key="quickstartServer" value-ref="quickstartServerData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="passowrd" /> <property name="lang" value="en" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment ** --> <bean id="quickstartServerData" class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl"> <property name="gwhost" value="example.com" /> <property name="gwserv" value="3300" /> <!-- Do not change the following property values --> <property name="progid" value="QUICKSTART" /> <property name="repositoryDestination" value="quickstartDest" /> <property name="connectionCount" value="2" /> </bean> </blueprint> Notice how this example also configures a destination connection, quickstartDest , which the server uses to retrieve metadata from a remote SAP instance. This destination is configured in the server data through the repositoryDestination option. If you do not configure this option, you would need to create a local metadata repository instead (see Section 290.6.4, "Repository Configuration" ). For example, after configuring the destination as shown in the preceding Blueprint XML file, you could handle the BAPI_FLCUST_GETLIST remote function call from an invoking client, using the following URI: sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST Required options The required options for the server data configuration object are, as follows: Name Default Value Description gwhost Gateway host to register the server connection with. gwserv Gateway service, which is the port on which a registration can be done. To resolve the service names sapgwXXX , the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. progid The program ID with which the registration is done. Serves as identifier on the gateway and in the destination in the ABAP system. repositoryDestination Specifies a destination name that the server can use to retrieve metadata from a metadata repository hosted in a remote SAP server. connectionCount The number of connections to register with the gateway. Secure network connection options The secure network connection options for the server data configuration object are, as follows: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on) sncQop SNC level of security, 1 to 9 sncMyname SNC name of your server. Overrides the default SNC name. Typically something like p:CN=JCoServer, O=ACompany, C=EN . sncLib Path to library which provides SNC service. 
If this property is not provided, the value of the jco.middleware.snc_lib property is used instead Other options The other options for the server data configuration object are, as follows: Name Default Value Description saprouter SAP router string to use for a system protected by a firewall, which can therefore only be reached through a SAProuter, when registering the server at the gateway of that ABAP System. A typical router string is /H/firewall.hostname/H/ maxStartupDelay The maximum time (in seconds) between two start-up attempts in case of failures. Initially, the waiting time is doubled from 1 second after each start-up failure until the maximum value is reached, or the server can be started successfully. trace Enable/disable RFC trace ( 0 or 1 ) workerThreadCount The maximum number of threads used by the server connection. If not set, the value for the connectionCount is used as the workerThreadCount . The maximum number of threads can not exceed 99. workerThreadMinCount The minimum number of threads used by server connection. If not set, the value for connectionCount is used as the workerThreadMinCount . 290.6.4. Repository Configuration Overview The configurations for repositories are maintained in the repositoryDataStore property of the SAP Component. Each entry in this map configures a distinct repository. The key for each entry is the name of the repository and this key also corresponds to the name of server to which this repository is attached. The value of each entry is a repository data configuration object, org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl , that defines the contents of a metadata repository. A repository data object is a map of function template configuration objects, org.fuesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl . Each entry in this map specifies the interface of a function module and the key for each entry is the name of the function module specified. Repository data example The following code shows a simple example of configuring a metadata repository: <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ... <!-- Configures the sap-srfc-server component --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="repositoryDataStore"> <map> <entry key="nplServer" value-ref="nplRepositoryData" /> </map> </property> </bean> <!-- Configures a metadata Repository --> <bean id="nplRepositoryData" class="org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl"> <property name="functionTemplates"> <map> <entry key="BOOK_FLIGHT" value-ref="bookFlightFunctionTemplate" /> </map> </property> </bean> ... </blueprint> Function template properties The interface of a function module consists of four parameter lists by which data is transferred back and forth to the function module in an RFC call. Each parameter list consists of one or more fields, each of which is a named parameter transferred in an RFC call. The following parameter lists and exception list are supported: The import parameter list contains parameter values that are sent to a function module in an RFC call; The export parameter list contains parameter values that are returned by a function module in an RFC call; The changing parameter list contains parameter values that are sent to and returned by a function module in an RFC call; The table parameter list contains internal table values that are sent to and returned by a function module in an RFC call. 
The interface of a function module also consists of an exception list of ABAP exceptions that may be raised when the module is invoked in an RFC call. A function template describes the name and type of parameters in each parameter list of a function interface and the ABAP exceptions thrown by the function. A function template object maintains five property lists of metadata objects, as described in the following table. Property Description importParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMeataDataImpl . Specifies the parameters that are sent in an RFC call to a function module. changingParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMeataDataImpl . Specifies the parameters that sent and returned in an RFC call to and from a function module. exportParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMeataDataImpl . Specifies the parameters that are returned in an RFC call from a function module. tableParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMeataDataImpl . Specifies the table parameters that are sent and returned in an RFC call to and from a function module. exceptionList A list of ABAP exception metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.AbapExceptionImpl . Specifies the ABAP exceptions potentially raised in an RFC call of function module. Function template example The following example shows an outline of how to configure a function template: List field metadata properties A list field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMeataDataImpl , specifies the name and type of a field in a parameter list. For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . decimals 0 The number of decimals in field value; only required for parameter types BCD and FLOAT. See Section 290.9, "Message Body for RFC" . optional false If true , the field is optional and need not be set in a RFC call Note All elementary parameter fields require that the name , type , byteLength and unicodeByteLength properties be specified in the field metadata object. In addition, the BCD , FLOAT , DECF16 and DECF34 fields require the decimal property to be specified in the field metadata object. For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows. 
optional false If true , the field is optional and need not be set in a RFC call Note All complex parameter fields require that the name , type and recordMetaData properties be specified in the field metadata object. The value of the recordMetaData property is a record field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , which specifies the structure of a nested structure or the structure of a table row. Elementary list field metadata example The following metadata configuration specifies an optional, 24-digit packed BCD number parameter with two decimal places named TICKET_PRICE : <bean class="org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl"> <property name="name" value="TICKET_PRICE" /> <property name="type" value="BCD" /> <property name="byteLength" value="12" /> <property name="unicodeByteLength" value="24" /> <property name="decimals" value="2" /> <property name="optional" value="true" /> </bean> Complex list field metadata example The following metadata configuration specifies a required TABLE parameter named CONNINFO with a row structure specified by the connectionInfo record metadata object: Record metadata properties A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , specifies the name and contents of a nested STRUCTURE or the row of a TABLE parameter. A record metadata object maintains a list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , which specify the parameters that reside in the nested structure or table row. The following table lists configuration properties that may be set on a record metadata object: Name Default Value Description name - The name of the record. recordFieldMetaData - The list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl . Specifies the fields contained within the structure. Note All properties of the record metadata object are required. Record metadata example The following example shows how to configure a record metadata object: <bean id="connectionInfo" class="org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl"> <property name="name" value="CONNECTION_INFO" /> <property name="recordFieldMetaData"> <list> ... </list> </property> </bean> Record field metadata properties A record field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , specifies the name and type of a parameter field withing a structure. A record field metadata object is similar to a parameter field metadata object, except that the offsets of the individual field locations within the nested structure or table row must be additionally specified. The non-Unicode and Unicode offsets of an individual field must be calculated and specified from the sum of non-Unicode and Unicode byte lengths of the preceding fields in the structure or row. Note Failure to properly specify the offsets of fields in nested structures and table rows causes the field storage of parameters in the underlying JCo and ABAP runtimes to overlap and prevent the proper transfer of values in RFC calls. 
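The following Blueprint XML fragment is a minimal sketch of how these offsets can be calculated, expanding the connectionInfo record metadata example shown above with two hypothetical fields; the field names, types and lengths are illustrative only and must be replaced with the metadata of your own structure. Each field's byteOffset and unicodeByteOffset values are the sums of the byteLength and unicodeByteLength values of all the fields that precede it in the structure.
<!-- Sketch only: field names, types and lengths are hypothetical examples. -->
<bean id="connectionInfo" class="org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl">
    <property name="name" value="CONNECTION_INFO" />
    <property name="recordFieldMetaData">
        <list>
            <!-- The first field always starts at offset 0 in both layouts. -->
            <bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl">
                <property name="name" value="AIRLINEID" />
                <property name="type" value="CHAR" />
                <property name="byteLength" value="3" />
                <property name="unicodeByteLength" value="6" />
                <property name="byteOffset" value="0" />
                <property name="unicodeByteOffset" value="0" />
            </bean>
            <!-- The second field starts where the first one ends:
                 byteOffset = 0 + 3 = 3 and unicodeByteOffset = 0 + 6 = 6. -->
            <bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl">
                <property name="name" value="CONNECTIONID" />
                <property name="type" value="NUM" />
                <property name="byteLength" value="4" />
                <property name="unicodeByteLength" value="8" />
                <property name="byteOffset" value="3" />
                <property name="unicodeByteOffset" value="6" />
            </bean>
        </list>
    </property>
</bean>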
For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . byteOffset - The field offset in bytes for non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for Unicode layout. This offset is the byte location of the field within the enclosing structure. decimals 0 The number of decimals in field value; only required for parameter types BCD and FLOAT . See Section 290.9, "Message Body for RFC" . For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field byteOffset - The field offset in bytes for non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for Unicode layout. This offset is the byte location of the field within the enclosing structure. recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows. Elementary record field metadata example The following metadata configuration specifies a DATE field parameter named ARRDATE located 85 bytes into the enclosing structure in the case of a non-Unicode layout and located 170 bytes into the enclosing structure in the case of a Unicode layout: Complex record field metadata example The following metadata configuration specifies a STRUCTURE field parameter named FLTINFO with a structure specified by the flightInfo record metadata object. The parameter is located at the beginning of the enclosing structure in both the case of a non-Unicode and Unicode layout. 290.7. Message Headers The SAP component supports the following message headers: Header Description CamelSap.scheme The URI scheme of the last endpoint to process the message. Use one of the following values: sap-srfc-destination sap-trfc-destination sap-qrfc-destination sap-srfc-server sap-trfc-server sap-idoc-destination sap-idoclist-destination sap-qidoc-destination sap-qidoclist-destination sap-idoclist-server CamelSap.destinationName The destination name of the last destination endpoint to process the message. CamelSap.serverName The server name of the last server endpoint to process the message. CamelSap.queueName The queue name of the last queuing endpoint to process the message. CamelSap.rfcName The RFC name of the last RFC endpoint to process the message. CamelSap.idocType The IDoc type of the last IDoc endpoint to process the message. CamelSap.idocTypeExtension The IDoc type extension, if any, of the last IDoc endpoint to process the message. CamelSap.systemRelease The system release, if any, of the last IDoc endpoint to process the message. 
CamelSap.applicationRelease The application release, if any, of the last IDoc endpoint to process the message. 290.8. Exchange Properties The SAP component adds the following exchange properties: Property Description CamelSap.destinationPropertiesMap A map containing the properties of each SAP destination encountered by the exchange. The map is keyed by destination name and each entry is a java.util.Properties object containing the configuration properties of that destination. CamelSap.serverPropertiesMap A map containing the properties of each SAP server encountered by the exchange. The map is keyed by server name and each entry is a java.util.Properties object containing the configuration properties of that server. 290.9. Message Body for RFC Request and response objects An SAP endpoint expects to receive a message with a message body containing an SAP request object and returns a message with a message body containing an SAP response object. SAP requests and responses are fixed map data structures containing named fields with each field having a predefined data type. Note The named fields in an SAP request and response are specific to an SAP endpoint, with each endpoint defining the parameters in the SAP request and the acceptable response. An SAP endpoint provides factory methods to create the request and response objects that are specific to it. Structure objects Both SAP request and response objects are represented in Java as a structure object which supports the org.fusesource.camel.component.sap.model.rfc.Structure interface. This interface extends both the java.util.Map and org.eclipse.emf.ecore.EObject interfaces. The field values in a structure object are accessed through the field's getter methods in the map interface. In addition, the structure interface provides a type-restricted method to retrieve field values. Structure objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a structure object have attached metadata which define and restrict the structure and contents of the map of fields it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to get a parameter not defined on a structure object returns null. Attempts to set a parameter not defined on a structure throws an exception as well as attempts to set the value of a parameter with an incorrect type. As discussed in the following sections, structure objects can contain fields that contain values of the complex field types, STRUCTURE and TABLE . Note It is unnecessary to create instances of these types and add them to the structure. Instances of these field values are created on demand if necessary when accessed in the enclosing structure. Field types The fields that reside within the structure object of an SAP request or response may be either elementary or complex . An elementary field contains a single scalar value, whereas a complex field contains one or more fields of an elementary or complex type. Elementary field types An elementary field may be a character, numeric, hexadecimal or string field type. 
The following table summarizes the types of elementary fields that may reside in a structure object: Field Type Corresponding Java Type Byte Length Unicode Byte Length Number Decimals Digits Description CHAR java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'C': Fixed sized character string DATE java.util.Date 8 16 - ABAP Type 'D': Date (format: YYYYMMDD) BCD java.math.BigDecimal 1 to 16 1 to 16 0 to 14 ABAP Type 'P': Packed BCD number. A BCD number contains two digits per byte. TIME java.util.Date 6 12 - ABAP Type 'T': Time (format: HHMMSS) BYTE byte[] 1 to 65535 1 to 65535 - ABAP Type 'X':Fixed sized byte array NUM java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'N': Fixed sized numeric character string FLOAT java.lang.Double 8 8 0 to 15 ABAP Type 'F': Floating point number INT java.lang.Integer 4 4 - ABAP Type 'I': 4-byte Integer INT2 java.lang.Integer 2 2 - ABAP Type 'S': 2-byte Integer INT1 java.lang.Integer 1 1 - ABAP Type 'B': 1-byte Integer DECF16 java.match.BigDecimal 8 8 16 ABAP Type 'decfloat16': 8 -byte Decimal Floating Point Number DECF34 java.math.BigDecimal 16 16 34 ABAP Type 'decfloat34': 16-byte Decimal Floating Point Number STRING java.lang.String 8 8 - ABAP Type 'G': Variable length character string XSTRING byte[] 8 8 - ABAP Type 'Y': Variable length byte array Character field types A character field contains a fixed sized character string that may use either a non-Unicode or Unicode character encoding in the underlying JCo and ABAP runtimes. Non-Unicode character strings encode one character per byte. Unicode characters strings are encoded in two bytes using UTF-16 encoding. Character field values are represented in Java as java.lang.String objects and the underlying JCo runtime is responsible for the conversion to their ABAP representation. A character field declares its field length in its associated byteLength and unicodeByteLength properties, which determine the length of the field's character string in each encoding system. CHAR A CHAR character field is a text field containing alphanumeric characters and corresponds to the ABAP type C. NUM A NUM character field is a numeric text field containing numeric characters only and corresponds to the ABAP type N. DATE A DATE character field is an 8 character date field with the year, month and day formatted as YYYYMMDD and corresponds to the ABAP type D. TIME A TIME character field is a 6 character time field with the hours, minutes and seconds formatted as HHMMSS and corresponds to the ABAP type T. Numeric field types A numeric field contains a number. The following numeric field types are supported: INT An INT numeric field is an integer field stored as a 4-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type I. An INT field value is represented in Java as a java.lang.Integer object. INT2 An INT2 numeric field is an integer field stored as a 2-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type S. An INT2 field value is represented in Java as a java.lang.Integer object. INT1 An INT1 field is an integer field stored as a 1-byte integer value in the underlying JCo and ABAP runtimes value and corresponds to the ABAP type B. An INT1 field value is represented in Java as a java.lang.Integer object. FLOAT A FLOAT field is a binary floating point number field stored as an 8-byte double value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type F. 
A FLOAT field declares the number of decimal digits that the field's value contains in its associated decimal property. In the case of a FLOAT field, this decimal property can have a value between 1 and 15 digits. A FLOAT field value is represented in Java as a java.lang.Double object.
BCD
A BCD field is a binary coded decimal field stored as a 1 to 16 byte packed number in the underlying JCo and ABAP runtimes and corresponds to the ABAP type P. A packed number stores two decimal digits per byte. A BCD field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BCD field, these properties can have a value between 1 and 16 bytes and both properties have the same value. A BCD field declares the number of decimal digits that the field's value contains in its associated decimal property. In the case of a BCD field, this decimal property can have a value between 1 and 14 digits. A BCD field value is represented in Java as a java.math.BigDecimal.
DECF16
A DECF16 field is a decimal floating point number stored as an 8-byte IEEE 754 decimal64 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat16 . The value of a DECF16 field has 16 decimal digits. The value of a DECF16 field is represented in Java as java.math.BigDecimal .
DECF34
A DECF34 field is a decimal floating point number stored as a 16-byte IEEE 754 decimal128 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat34 . The value of a DECF34 field has 34 decimal digits. The value of a DECF34 field is represented in Java as java.math.BigDecimal .
Hexadecimal field types
A hexadecimal field contains raw binary data. The following hexadecimal field types are supported:
BYTE
A BYTE field is a fixed sized byte string stored as a byte array in the underlying JCo and ABAP runtimes and corresponds to the ABAP type X. A BYTE field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BYTE field, these properties can have a value between 1 and 65535 bytes and both properties have the same value. The value of a BYTE field is represented in Java as a byte[] object.
String field types
A string field references a variable length string value. The length of that string value is not fixed until runtime. The storage for the string value is dynamically created in the underlying JCo and ABAP runtimes. The storage for the string field itself is fixed and contains only a string header.
STRING
A STRING field refers to a character string and is stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type G. The value of the STRING field is represented in Java as a java.lang.String object.
XSTRING
An XSTRING field refers to a byte string and is stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type Y. The value of the XSTRING field is represented in Java as a byte[] object.
Complex field types
A complex field may be either a structure or table field type. The following table summarizes these complex field types.
Field Type Corresponding Java Type Byte Length Unicode Byte Length Number Decimals Digits Description STRUCTURE org.fusesource.camel.component.sap.model.rfc.Structure Total of individual field byte lengths Total of individual field Unicode byte lengths - ABAP Type 'u' & 'v': Heterogeneous Structure TABLE org.fusesource.camel.component.sap.model.rfc.Table Byte length of row structure Unicode byte length of row structure - ABAP Type 'h': Table Structure field types A STRUCTURE field contains a structure object and is stored in the underlying JCo and ABAP runtimes as an ABAP structure record. It corresponds to either an ABAP type u or v . The value of a STRUCTURE field is represented in Java as a structure object with the interface org.fusesource.camel.component.sap.model.rfc.Structure . Table field types A TABLE field contains a table object and is stored in the underlying JCo and ABAP runtimes as an ABAP internal table. It corresponds to the ABAP type h . The value of the field is represented in Java by a table object with the interface org.fusesource.camel.component.sap.model.rfc.Table . Table objects A table object is a homogeneous list data structure containing rows of structure objects with the same structure. This interface extends both the java.util.List and org.eclipse.emf.ecore.EObject interfaces. public interface Table<S extends Structure> extends org.eclipse.emf.ecore.EObject, java.util.List<S> { /** * Creates and adds table row at end of row list */ S add(); /** * Creates and adds table row at index in row list */ S add(int index); } The list of rows in a table object are accessed and managed using the standard methods defined in the list interface. In addition, the table interface provides two factory methods for creating and adding structure objects to the row list. Table objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a table object have attached metadata which define and restrict the structure and contents of the rows it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to add or set a row structure value of the wrong type throws an exception. 290.10. Message Body for IDoc IDoc message type When using one of the IDoc Camel SAP endpoints, the type of the message body depends on which particular endpoint you are using. For a sap-idoc-destination endpoint or a sap-qidoc-destination endpoint, the message body is of Document type: For a sap-idoclist-destination endpoint, a sap-qidoclist-destination endpoint, or a sap-idoclist-server endpoint, the message body is of DocumentList type: The IDoc document model For the Camel SAP component, an IDoc document is modelled using the Eclipse Modelling Framework (EMF), which provides a wrapper API around the underlying SAP IDoc API. The most important types in this model are: The Document type represents an IDoc document instance. In outline, the Document interface exposes the following methods: The following kinds of method are exposed by the Document interface: Methods for accessing the control record Most of the methods are for accessing or modifying field values of the IDoc control record. These methods are of the form AttributeName , AttributeName , where AttributeName is the name of a field value (see Table 290.2, "IDoc Document Attributes" ). 
Method for accessing the document contents The getRootSegment method provides access to the document contents (IDoc data records), returning the contents as a Segment object. Each Segment object can contain an arbitrary number of child segments, and the segments can be nested to an arbitrary degree. Note The precise layout of the segment hierarchy is defined by the particular IDoc type of the document. When creating (or reading) a segment hierarchy, therefore, you must be sure to follow the exact structure as defined by the IDoc type. The Segment type is used to access the data records of the IDoc document, where the segments are laid out in accordance with the structure defined by the document's IDoc type. In outline, the Segment interface exposes the following methods: The getChildren(String segmentType) method is particularly useful for adding new (nested) children to a segment. It returns an object of type, SegmentList , which is defined as follows: Hence, to create a data record of E1SCU_CRE type, you could use Java code like the following: Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren("E1SCU_CRE").add(); How an IDoc is related to a Document object According to the SAP documentation, an IDoc document consists of the following main parts: Control record The control record (which contains the metadata for the IDoc document) is represented by the attributes on the Document object - see Table 290.2, "IDoc Document Attributes" for details. Data records The data records are represented by the Segment objects, which are constructed as a nested hierarchy of segments. You can access the root segment through the Document.getRootSegment method. Status records In the Camel SAP component, the status records are not represented by the document model. But you do have access to the latest status value through the status attribute on the control record. Example of creating a Document instance For example, Example 290.1, "Creating an IDoc Document in Java" shows how to create an IDoc document with the IDoc type, FLCUSTOMER_CREATEFROMDATA01 , using the IDoc model API in Java. Example 290.1. Creating an IDoc Document in Java Document attributes Table 290.2, "IDoc Document Attributes" shows the control record attributes that you can set on the Document object. Table 290.2. 
IDoc Document Attributes Attribute Length SAP Field Description archiveKey 70 ARCKEY EDI archive key client 3 MANDT Client creationDate 8 CREDAT Date IDoc was created creationTime 6 CRETIM Time IDoc was created direction 1 DIRECT Direction eDIMessage 14 REFMES Reference to message eDIMessageGroup 14 REFGRP Reference to message group eDIMessageType 6 STDMES EDI message type eDIStandardFlag 1 STD EDI standard eDIStandardVersion 6 STDVRS Version of EDI standard eDITransmissionFile 14 REFINT Reference to interchange file iDocCompoundType 8 DOCTYP IDoc type iDocNumber 16 DOCNUM IDoc number iDocSAPRelease 4 DOCREL SAP Release of IDoc iDocType 30 IDOCTP Name of basic IDoc type iDocTypeExtension 30 CIMTYP Name of extension type messageCode 3 MESCOD Logical message code messageFunction 3 MESFCT Logical message function messageType 30 MESTYP Logical message type outputMode 1 OUTMOD Output mode recipientAddress 10 RCVSAD Receiver address (SADR) recipientLogicalAddress 70 RCVLAD Logical address of receiver recipientPartnerFunction 2 RCVPFC Partner function of receiver recipientPartnerNumber 10 RCVPRN Partner number of receiver recipientPartnerType 2 RCVPRT Partner type of receiver recipientPort 10 RCVPOR Receiver port (SAP System, EDI subsystem) senderAddress SNDSAD Sender address (SADR) senderLogicalAddress 70 SNDLAD Logical address of sender senderPartnerFunction 2 SNDPFC Partner function of sender senderPartnerNumber 10 SNDPRN Partner number of sender senderPartnerType 2 SNDPRT Partner type of sender senderPort 10 SNDPOR Sender port (SAP System, EDI subsystem) serialization 20 SERIAL EDI/ALE: Serialization field status 2 STATUS Status of IDoc testFlag 1 TEST Test flag Setting document attributes in Java When setting the control record attributes in Java (from Table 290.2, "IDoc Document Attributes" ), the usual convention for Java bean properties is followed. That is, a name attribute can be accessed through the getName and setName methods, for getting and setting the attribute value. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows on a Document object: // Java document.setIDocType("FLCUSTOMER_CREATEFROMDATA01"); document.setIDocTypeExtension(""); document.setMessageType("FLCUSTOMER_CREATEFROMDATA"); Setting document attributes in XML When setting the control record attributes in XML, the attributes must be set on the idoc:Document element. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows: <?xml version="1.0" encoding="ASCII"?> <idoc:Document ... iDocType="FLCUSTOMER_CREATEFROMDATA01" iDocTypeExtension="" messageType="FLCUSTOMER_CREATEFROMDATA" ... > ... </idoc:Document> 290.11. Transaction Support BAPI transaction model The SAP Component supports the BAPI transaction model for outbound communication with SAP. A destination endpoint with a URL containing the transacted option set to true initiates a stateful session on the outbound connection of the endpoint and registers a Camel Synchronization object with the exchange. This synchronization object calls the BAPI service method BAPI_TRANSACTION_COMMIT and end the stateful session when the processing of the message exchange is complete. If the processing of the message exchange fails, the synchronization object calls the BAPI server method BAPI_TRANSACTION_ROLLBACK and end the stateful session. 
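For example, the following XML DSL fragment is a minimal sketch of a route that enables the BAPI transaction model on a destination endpoint. The destination name quickstartDest and the direct: endpoint name are assumptions and must match your own configuration:
<route>
    <from uri="direct:createFlightTrip"/>
    <to uri="sap-srfc-destination:quickstartDest:BAPI_FLTRIP_CREATE?transacted=true"/>
</route>
Section 290.14.2, "Example 2: Writing Data to SAP" walks through a complete route that uses this option.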
RFC transaction model
The tRFC protocol accomplishes an AT-MOST-ONCE delivery and processing guarantee by identifying each transactional request with a unique transaction identifier (TID). A TID accompanies each request sent in the protocol. A sending application using the tRFC protocol must identify each instance of a request with a unique TID when sending the request. An application may send a request with a given TID multiple times, but the protocol ensures that the request is delivered and processed in the receiving system at most once. An application may choose to resend a request with a given TID when encountering a communication or system error when sending the request, and is thus in doubt as to whether that request was delivered and processed in the receiving system. By resending a request when encountering a communication error, a client application using the tRFC protocol can thus ensure EXACTLY-ONCE delivery and processing guarantees for its request.
Which transaction model to use?
A BAPI transaction is an application level transaction, in the sense that it imposes ACID guarantees on the persistent data changes performed by a BAPI method or RFC function in the SAP database. An RFC transaction is a communication transaction, in the sense that it imposes delivery guarantees (AT-MOST-ONCE, EXACTLY-ONCE, EXACTLY-ONCE-IN-ORDER) on requests to a BAPI method and/or RFC function.
Transactional RFC destination endpoints
The following destination endpoints support RFC transactions:
sap-trfc-destination
sap-qrfc-destination
A single Camel route can include multiple transactional RFC destination endpoints, sending messages to multiple RFC destinations and even sending messages to the same RFC destination multiple times. This implies that the Camel SAP component potentially needs to keep track of many transaction IDs (TIDs) for each Exchange object passing along a route. Now if the route processing fails and must be retried, the situation gets quite complicated. The RFC transaction semantics demand that each RFC destination along the route must be invoked using the same TID that was used the first time around (and where the TIDs for each of the destinations are distinct from each other). In other words, the Camel SAP component must keep track of which TID was used at which point along the route, and remember this information, so that the TIDs can be replayed in the correct order.
By default, Camel does not provide a mechanism that enables an Exchange to know where it is in a route. To provide such a mechanism, it is necessary to install the CurrentProcessorDefinitionInterceptStrategy interceptor into the Camel runtime. This interceptor must be installed into the Camel runtime in order to keep track of the TIDs in a route with the Camel SAP component. For details of how to configure the interceptor, see the section called "Interceptor for tRFC and qRFC destinations" .
Transactional RFC server endpoints
The following server endpoints support RFC transactions:
sap-trfc-server
When a Camel exchange processing a transactional request encounters a processing error, Camel handles the processing error through its standard error handling mechanisms. If the Camel route processing the exchange is configured to propagate the error back to the caller, the SAP server endpoint that initiated the exchange takes note of the failure and the sending SAP system is notified of the error. The sending SAP system can then respond by sending another transaction request with the same TID to process the request again.
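To illustrate the destination-side behavior described above, the following XML DSL fragment is a minimal sketch of a route that sends the same exchange to two transactional RFC destinations. The destination names ( erpDest , crmDest ) and the RFC name are hypothetical placeholders, and the CurrentProcessorDefinitionInterceptStrategy bean must already be declared in the same container, as shown in the section called "Interceptor for tRFC and qRFC destinations" :
<!-- Sketch only: destination and RFC names are placeholders. -->
<route>
    <from uri="direct:replicateCustomer"/>
    <to uri="sap-trfc-destination:erpDest:BAPI_FLCUST_CREATEFROMDATA"/>
    <to uri="sap-trfc-destination:crmDest:BAPI_FLCUST_CREATEFROMDATA"/>
</route>
Because the component records a distinct TID for each destination endpoint, a retry of the exchange replays the same TIDs at the same points in the route, preserving the delivery guarantees described above.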
290.12. XML Serialization for RFC Overview SAP request and response objects support an XML serialization format which enable these objects to be serialized to and from an XML document. XML namespace Each RFC in a repository defines a specific XML namespace for the elements which compose the serialized forms of its Request and Response objects. The form of this namespace URL is as follows: RFC namespace URLs have a common http://sap.fusesource.org/rfc prefix followed by the name of the repository in which the RFC's metadata is defined. The final component in the URL is the name of the RFC itself. Request and response XML documents An SAP request object is serialized into an XML document with the root element of that document named Request and scoped by the namespace of the request's RFC. <?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:Request> An SAP response object is serialized into an XML document with the root element of that document named Response and scoped by the namespace of the response's RFC. <?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Response xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:Response> Structure fields Structure fields in parameter lists or nested structures are serialized as elements. The element name of the serialized structure corresponds to the field name of the structure within the enclosing parameter list, structure or table row entry it resides. <BOOK_FLIGHT:FLTINFO xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:FLTINFO> Note The type name of the structure element in the RFC namespace corresponds to the name of the record metadata object which defines the structure, as in the following example: <xs:schema targetNamespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> xmlns:xs="http://www.w3.org/2001/XMLSchema"> ... <xs:complexType name="FLTINFO_STRUCTURE"> ... </xs:complexType> ... </xs:schema> This distinction is important when specifying a JAXB bean to marshal and unmarshal the structure as is seen in Section 290.14.3, "Example 3: Handling Requests from SAP" . Table fields Table fields in parameter lists or nested structures are serialized as elements. The element name of the serialized structure corresponds to the field name of the table within the enclosing parameter list, structure, or table row entry it resides. The table element contains a series of row elements to hold the serialized values of the table's row entries. <BOOK_FLIGHT:CONNINFO xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> <row ... > ... </row> ... <row ... > ... </row> </BOOK_FLIGHT:CONNINFO> Note The type name of the table element in the RFC namespace corresponds to the name of the record metadata object which defines the row structure of the table suffixed by _TABLE . The type name of the table row element in the RFC name corresponds to the name of the record metadata object which defines the row structure of the table, as in the following example: <xs:schema targetNamespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT" xmlns:xs="http://www.w3.org/2001/XMLSchema"> ... <xs:complextType name="CONNECTION_INFO_STRUCTURE_TABLE"> <xs:sequence> <xs:element name="row" minOccures="0" maxOccurs="unbounded" type="CONNECTION_INFO_STRUCTURE"/> ... <xs:sequence> </xs:sequence> </xs:complexType> <xs:complextType name="CONNECTION_INFO_STRUCTURE"> ... 
</xs:complexType> ... </xs:schema>
This distinction is important when specifying a JAXB bean to marshal and unmarshal the structure as is seen in Section 290.14.3, "Example 3: Handling Requests from SAP" .
Elementary fields
Elementary fields in parameter lists or nested structures are serialized as attributes on the element of the enclosing parameter list or structure. The attribute name of the serialized field corresponds to the field name of the field within the enclosing parameter list, structure, or table row entry it resides, as in the following example:
<?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT" CUSTNAME="James Legrand" PASSFORM="Mr" PASSNAME="Travelin Joe" PASSBIRTH="1990-03-17T00:00:00.000-0500" FLIGHTDATE="2014-03-19T00:00:00.000-0400" TRAVELAGENCYNUMBER="00000110" DESTINATION_FROM="SFO" DESTINATION_TO="FRA"/>
Date and time formats
Date and Time fields are serialized into attribute values using the following format:
Date fields are serialized with only the year, month, day and timezone components set: DEPDATE="2014-03-19T00:00:00.000-0400"
Time fields are serialized with only the hour, minute, second, millisecond and timezone components set: DEPTIME="1970-01-01T16:00:00.000-0500"
290.13. XML Serialization for IDoc
Overview
An IDoc message body can be serialized into an XML string format, with the help of a built-in type converter.
XML namespace
Each serialized IDoc is associated with an XML namespace, which has the following general format:
http://sap.fusesource.org/idoc/repositoryName/idocType/idocTypeExtension/systemRelease/applicationRelease
Both the repositoryName (name of the remote SAP metadata repository) and the idocType (IDoc document type) are mandatory, but the other components of the namespace can be left blank. For example, you could have an XML namespace like the following: http://sap.fusesource.org/idoc/MY_REPO/FLCUSTOMER_CREATEFROMDATA01///
Built-in type converter
The Camel SAP component has a built-in type converter, which is capable of converting a Document object or a DocumentList object to and from a String type. For example, to serialize a Document object to an XML string, you can simply add the following line to a route in XML DSL: <convertBodyTo type="java.lang.String"/>
You can also use this approach to convert a serialized XML message into a Document object. For example, given that the current message body is a serialized XML string, you can convert it back into a Document object by adding the following line to a route in XML DSL: <convertBodyTo type="org.fusesource.camel.component.sap.model.idoc.Document"/>
Sample IDoc message body in XML format
When you convert an IDoc message to a String , it is serialized into an XML document, where the root element is either idoc:Document (for a single document) or idoc:DocumentList (for a list of documents). Example 290.2, "IDoc Message Body in XML" shows a single IDoc document that has been serialized to an idoc:Document element. Example 290.2.
IDoc Message Body in XML <?xml version="1.0" encoding="ASCII"?> <idoc:Document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:FLCUSTOMER_CREATEFROMDATA01---="http://sap.fusesource.org/idoc/XXX/FLCUSTOMER_CREATEFROMDATA01///" xmlns:idoc="http://sap.fusesource.org/idoc" creationDate="2015-01-28T12:39:13.980-0500" creationTime="2015-01-28T12:39:13.980-0500" iDocType="FLCUSTOMER_CREATEFROMDATA01" iDocTypeExtension="" messageType="FLCUSTOMER_CREATEFROMDATA" recipientPartnerNumber="QUICKCLNT" recipientPartnerType="LS" senderPartnerNumber="QUICKSTART" senderPartnerType="LS"> <rootSegment xsi:type="FLCUSTOMER_CREATEFROMDATA01---:ROOT" document="/"> <segmentChildren parent="//@rootSegment"> <E1SCU_CRE parent="//@rootSegment" document="/"> <segmentChildren parent="//@rootSegment/@segmentChildren/@E1SCU_CRE.0"> <E1BPSCUNEW parent="//@rootSegment/@segmentChildren/@E1SCU_CRE.0" document="/" CUSTNAME="Fred Flintstone" FORM="Mr." STREET="123 Rubble Lane" POSTCODE="01234" CITY="Bedrock" COUNTR="US" PHONE="800-555-1212" EMAIL="[email protected]" CUSTTYPE="P" DISCOUNT="005" LANGU="E"/> </segmentChildren> </E1SCU_CRE> </segmentChildren> </rootSegment> </idoc:Document> 290.14. SAP Examples 290.14.1. Example 1: Reading Data from SAP Overview This example demonstrates a route that reads FlightCustomer business object data from SAP. The route invokes the FlightCustomer BAPI method, BAPI_FLCUST_GETLIST , using a SAP synchronous RFC destination endpoint to retrieve the data. Java DSL for route The Java DSL for the example route is as follows: from("direct:getFlightCustomerInfo") .to("bean:createFlightCustomerGetListRequest") .to("sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST") .to("bean:returnFlightCustomerInfo"); XML DSL for route And the Spring DSL for the same route is as follows: <route> <from uri="direct:getFlightCustomerInfo"/> <to uri="bean:createFlightCustomerGetListRequest"/> <to uri="sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST"/> <to uri="bean:returnFlightCustomerInfo"/> </route> createFlightCustomerGetListRequest bean The createFlightCustomerGetListRequest bean is responsible for building a SAP request object in its exchange method used in the RFC call of the subsequent SAP endpoint . The following code snippet demonstrates the sequence of operations to build the request object: public void create(Exchange exchange) throws Exception { // Get SAP Endpoint to be called from context. SapSynchronousRfcDestinationEndpoint endpoint = exchange.getContext().getEndpoint("sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST", SapSynchronousRfcDestinationEndpoint.class); // Retrieve bean from message containing Flight Customer name to // look up. BookFlightRequest bookFlightRequest = exchange.getIn().getBody(BookFlightRequest.class); // Create SAP Request object from target endpoint. Structure request = endpoint.getRequest(); // Add Customer Name to request if set if (bookFlightRequest.getCustomerName() != null && bookFlightRequest.getCustomerName().length() > 0) { request.put("CUSTOMER_NAME", bookFlightRequest.getCustomerName()); } } else { throw new Exception("No Customer Name"); } // Put request object into body of exchange message. exchange.getIn().setBody(request); } returnFlightCustomerInfo bean The returnFlightCustomerInfo bean is responsible for extracting data from the SAP response object in its exchange method that it receives from the SAP endpoint. 
The following code snippet demonstrates the sequence of operations to extract the data from the response object: public void createFlightCustomerInfo(Exchange exchange) throws Exception { // Retrieve SAP response object from body of exchange message. Structure flightCustomerGetListResponse = exchange.getIn().getBody(Structure.class); if (flightCustomerGetListResponse == null) { throw new Exception("No Flight Customer Get List Response"); } // Check BAPI return parameter for errors @SuppressWarnings("unchecked") Table<Structure> bapiReturn = flightCustomerGetListResponse.get("RETURN", Table.class); Structure bapiReturnEntry = bapiReturn.get(0); if (bapiReturnEntry.get("TYPE", String.class) != "S") { String message = bapiReturnEntry.get("MESSAGE", String.class); throw new Exception("BAPI call failed: " + message); } // Get customer list table from response object. @SuppressWarnings("unchecked") Table<? extends Structure> customerList = flightCustomerGetListResponse.get("CUSTOMER_LIST", Table.class); if (customerList == null || customerList.size() == 0) { throw new Exception("No Customer Info."); } // Get Flight Customer data from first row of table. Structure customer = customerList.get(0); // Create bean to hold Flight Customer data. FlightCustomerInfo flightCustomerInfo = new FlightCustomerInfo(); // Get customer id from Flight Customer data and add to bean. String customerId = customer.get("CUSTOMERID", String.class); if (customerId != null) { flightCustomerInfo.setCustomerNumber(customerId); } ... // Put bean into body of exchange message. exchange.getIn().setHeader("flightCustomerInfo", flightCustomerInfo); } 290.14.2. Example 2: Writing Data to SAP Overview This example demonstrates a route that creates a FlightTrip business object instance in SAP. The route invokes the FlightTrip BAPI method, BAPI_FLTRIP_CREATE , using a destination endpoint to create the object. Java DSL for route The Java DSL for the example route is as follows: from("direct:createFlightTrip") .to("bean:createFlightTripRequest") .to("sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true") .to("bean:returnFlightTripResponse"); XML DSL for route And the Spring DSL for the same route is as follows: <route> <from uri="direct:createFlightTrip"/> <to uri="bean:createFlightTripRequest"/> <to uri="sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true"/> <to uri="bean:returnFlightTripResponse"/> </route> Transaction support Note The URL for the SAP endpoint has the transacted option set to true . As discussed in Section 290.11, "Transaction Support" , when this option is enabled the endpoint ensures that a SAP transaction session has been initiated before invoking the RFC call. Because this endpoint's RFC creates new data in SAP, this option is necessary to make the route's changes permanent in SAP. Populating request parameters The createFlightTripRequest and returnFlightTripResponse beans are responsible for populating request parameters into the SAP request and extracting response parameters from the SAP response respectively, following the same sequence of operations as demonstrated in the example. 290.14.3. Example 3: Handling Requests from SAP Overview This example demonstrates a route that handles a request from SAP to the BOOK_FLIGHT RFC, which is implemented by the route. In addition, it demonstrates the component's XML serialization support, using JAXB to unmarshal and marshal SAP request objects and response objects to custom beans. 
This route creates a FlightTrip business object on behalf of a travel agent, FlightCustomer . The route first unmarshals the SAP request object received by the SAP server endpoint into a custom JAXB bean. This custom bean is then multicasted in the exchange to three sub-routes, which gather the travel agent, flight connection, and passenger information required to create the flight trip. The final sub-route creates the flight trip object in SAP, as demonstrated in the example. The final sub-route also creates and returns a custom JAXB bean which is marshaled into a SAP response object and returned by the server endpoint. Java DSL for route The Java DSL for the example route is as follows: DataFormat jaxb = new JaxbDataFormat("org.fusesource.sap.example.jaxb"); from("sap-srfc-server:nplserver:BOOK_FLIGHT") .unmarshal(jaxb) .multicast() .to("direct:getFlightConnectionInfo", "direct:getFlightCustomerInfo", "direct:getPassengerInfo") .end() .to("direct:createFlightTrip") .marshal(jaxb); XML DSL for route And the XML DSL for the same route is as follows: <route> <from uri="sap-srfc-server:nplserver:BOOK_FLIGHT"/> <unmarshal> <jaxb contextPath="org.fusesource.sap.example.jaxb"/> </unmarshal> <multicast> <to uri="direct:getFlightConnectionInfo"/> <to uri="direct:getFlightCustomerInfo"/> <to uri="direct:getPassengerInfo"/> </multicast> <to uri="direct:createFlightTrip"/> <marshal> <jaxb contextPath="org.fusesource.sap.example.jaxb"/> </marshal> </route> BookFlightRequest bean The following listing illustrates a JAXB bean which unmarshals from the serialized form of a SAP BOOK_FLIGHT request object: @XmlRootElement(name="Request", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightRequest { @XmlAttribute(name="CUSTNAME") private String customerName; @XmlAttribute(name="FLIGHTDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date flightDate; @XmlAttribute(name="TRAVELAGENCYNUMBER") private String travelAgencyNumber; @XmlAttribute(name="DESTINATION_FROM") private String startAirportCode; @XmlAttribute(name="DESTINATION_TO") private String endAirportCode; @XmlAttribute(name="PASSFORM") private String passengerFormOfAddress; @XmlAttribute(name="PASSNAME") private String passengerName; @XmlAttribute(name="PASSBIRTH") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlAttribute(name="CLASS") private String flightClass; ... } BookFlightResponse bean The following listing illustrates a JAXB bean which marshals to the serialized form of a SAP BOOK_FLIGHT response object: @XmlRootElement(name="Response", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightResponse { @XmlAttribute(name="TRIPNUMBER") private String tripNumber; @XmlAttribute(name="TICKET_PRICE") private BigDecimal ticketPrice; @XmlAttribute(name="TICKET_TAX") private BigDecimal ticketTax; @XmlAttribute(name="CURRENCY") private String currency; @XmlAttribute(name="PASSFORM") private String passengerFormOfAddress; @XmlAttribute(name="PASSNAME") private String passengerName; @XmlAttribute(name="PASSBIRTH") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlElement(name="FLTINFO") private FlightInfo flightInfo; @XmlElement(name="CONNINFO") private ConnectionInfoTable connectionInfo; ... } Note The complex parameter fields of the response object are serialized as child elements of the response. 
FlightInfo bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex structure parameter FLTINFO : @XmlRootElement(name="FLTINFO", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class FlightInfo { @XmlAttribute(name="FLIGHTTIME") private String flightTime; @XmlAttribute(name="CITYFROM") private String cityFrom; @XmlAttribute(name="DEPDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureDate; @XmlAttribute(name="DEPTIME") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureTime; @XmlAttribute(name="CITYTO") private String cityTo; @XmlAttribute(name="ARRDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalDate; @XmlAttribute(name="ARRTIME") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalTime; ... } ConnectionInfoTable bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex table parameter, CONNINFO . Note The name of the root element type of the JAXB bean corresponds to the name of the row structure type suffixed with _TABLE and the bean contains a list of row elements. @XmlRootElement(name="CONNINFO_TABLE", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfoTable { @XmlElement(name="row") List<ConnectionInfo> rows; ... } ConnectionInfo bean The following listing illustrates a JAXB bean which marshals to the serialized form of the above table's row elements: @XmlRootElement(name="CONNINFO", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfo { @XmlAttribute(name="CONNID") String connectionId; @XmlAttribute(name="AIRLINE") String airline; @XmlAttribute(name="PLANETYPE") String planeType; @XmlAttribute(name="CITYFROM") String cityFrom; @XmlAttribute(name="DEPDATE") @XmlJavaTypeAdapter(DateAdapter.class) Date departureDate; @XmlAttribute(name="DEPTIME") @XmlJavaTypeAdapter(DateAdapter.class) Date departureTime; @XmlAttribute(name="CITYTO") String cityTo; @XmlAttribute(name="ARRDATE") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalDate; @XmlAttribute(name="ARRTIME") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalTime; ... }
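DateAdapter class The date and time attributes in the listings above are converted with a DateAdapter class named in the @XmlJavaTypeAdapter annotations, but its implementation is not included in this example. The following listing is a minimal sketch of what such an adapter might look like; it assumes the yyyy-MM-dd'T'HH:mm:ss.SSSZ pattern that the component uses for serialized date and time values, and the adapter shipped with the example code may differ.

// Hypothetical sketch of the DateAdapter referenced by the JAXB beans above.
// Assumption: SAP date and time attributes are serialized with the
// yyyy-MM-dd'T'HH:mm:ss.SSSZ pattern used by the component.
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.xml.bind.annotation.adapters.XmlAdapter;

public class DateAdapter extends XmlAdapter<String, Date> {

    private static final String PATTERN = "yyyy-MM-dd'T'HH:mm:ss.SSSZ";

    @Override
    public Date unmarshal(String value) throws Exception {
        // SimpleDateFormat is not thread safe, so create a new instance per call.
        return new SimpleDateFormat(PATTERN).parse(value);
    }

    @Override
    public String marshal(Date value) throws Exception {
        return new SimpleDateFormat(PATTERN).format(value);
    }
}

The JAXB runtime invokes unmarshal when reading serialized SAP attributes into the bean fields and marshal when writing the bean back out, so no registration is required beyond the @XmlJavaTypeAdapter annotations shown in the listings.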
|
[
"<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap</artifactId> <version>x.x.x</version> <dependency>",
":experimental: // Standard document attributes to be used in the documentation // // The following are shared by all documents :toc: :toclevels: 4 :numbered:",
"org.osgi.framework.system.packages.extra ... , com.sap.conn.idoc, com.sap.conn.idoc.jco, com.sap.conn.jco, com.sap.conn.jco.ext, com.sap.conn.jco.monitor, com.sap.conn.jco.rt, com.sap.conn.jco.server",
"JBossFuse:karaf@root> features:install camel-sap",
"cp sapjco3.jar sapidoc3.jar USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/ mkdir -p USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/lib/linux-x86_64 cp libsapjco3.so USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/lib/linux-x86_64/",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.wildfly.camel.extras\"> <dependencies> <module name=\"org.fusesource.camel.component.sap\" export=\"true\" services=\"export\" /> </dependencies> </module>",
"<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency>",
"src βββ lib βββ amd64.com.sap.conn βββ idoc β βββ sapidoc3.jar βββ jco βββ sapjco3.jar βββ sapjco3.so",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestEntries> <Class-Path>lib/USD{os.arch}/sapjco3.jar lib/USD{os.arch}/sapidoc3.jar</Class-Path> </manifestEntries> </archive> </configuration> </plugin>",
"<plugin> <artifactId>maven-resources-plugin</artifactId> <executions> <execution> <id>copy-resources01</id> <phase>process-classes</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <outputDirectory>USD{basedir}/target/lib</outputDirectory> <encoding>UTF-8</encoding> <resources> <resource> <directory>USD{basedir}/lib</directory> <includes> <include>**/**</include> </includes> </resource> </resources> </configuration> </execution> </executions> </plugin>",
"new-build --binary=true --image-stream=\"<current_Fuse_Java_OpenShift_Imagestream_version>\" --name=<application_name> -e \"ARTIFACT_COPY_ARGS=-a .\" -e \"MAVEN_ARGS_APPEND=<additional_args> -e \"ARTIFACT_DIR=<relative_path_of_target_directory>\"",
"new-build --binary=true --image-stream=\"fuse7-java-openshift:1.4\" --name=sapik6 -e \"ARTIFACT_COPY_ARGS=-a .\" -e \"MAVEN_ARGS_APPEND=-pl spring-boot/sap-srfc-destination-spring-boot\" -e \"ARTIFACT_DIR=spring-boot/sap-srfc-destination-spring-boot/target\"",
"start-build sapik6 --from-dir=.",
"new-app --image-stream=<name>:<version>",
"new-app --image-stream=sapik6:latest",
"src βββ lib βββ amd64.com.sap.conn βββ idoc β βββ sapidoc3.jar βββ jco βββ sapjco3.jar βββ sapjco3.so",
"<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency>",
"<resources> <resource> <directory>src/lib/USD{os.arch}/com/sap/conn/idoc</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> <resource> <directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> </resources>",
"<plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <images> <image> <name>USD{project.artifactId}:USD{project.version}</name> <build> <from>USD{java.docker.image}</from> <assembly> <targetDir>/deployments</targetDir> <layers> <layer> <id>static-files</id> <fileSets> <fileSet> <directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <outputDirectory>static</outputDirectory> <includes> <include>*.so</include> </includes> </fileSet> </fileSets> </layer> </layers> </assembly> </build> </image> </images> </configuration> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin>",
"cd <sap_application_path>",
"new-project streams",
"import-image streams/fuse7-java-openshift:1.11 --from=registry.redhat.io/fuse7/fuse-java-openshift-rhel8:1.11-32 --confirm -n streams (JDK8)",
"new-project <your_project>",
"mvn clean oc:deploy -Djkube.docker.imagePullPolicy=Always -Popenshift -Djkube.generator.from=streams/fuse7-java-openshift:1.11 -Djkube.resourceDir=./src/main/jkube -Djkube.openshiftManifest=target/classes/META-INF/jkube/openshift.yml -Djkube.generator.fromMode=istag",
"sap-srfc-destination: destinationName : rfcName sap-trfc-destination: destinationName : rfcName sap-qrfc-destination: destinationName : queueName : rfcName sap-srfc-server: serverName : rfcName [? options ] sap-trfc-server: serverName : rfcName [? options ]",
"sap-idoc-destination: destinationName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-idoclist-destination: destinationName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-qidoc-destination: destinationName : queueName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-qidoclist-destination: destinationName : queueName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-idoclist-server: serverName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]][? options ]",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> <property name=\"serverDataStore\"> <map> <entry key=\"quickstartServer\" value-ref=\"quickstartServerData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"passowrd\" /> <property name=\"lang\" value=\"en\" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment ** --> <bean id=\"quickstartServerData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl\"> <property name=\"gwhost\" value=\"example.com\" /> <property name=\"gwserv\" value=\"3300\" /> <!-- Do not change the following property values --> <property name=\"progid\" value=\"QUICKSTART\" /> <property name=\"repositoryDestination\" value=\"quickstartDest\" /> <property name=\"connectionCount\" value=\"2\" /> </bean> </blueprint>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Create interceptor to support tRFC processing --> <bean id=\"currentProcessorDefinitionInterceptor\" class=\"org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy\" /> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"password\" /> <property name=\"lang\" value=\"en\" /> </bean> </blueprint>",
"sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> <property name=\"serverDataStore\"> <map> <entry key=\"quickstartServer\" value-ref=\"quickstartServerData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"passowrd\" /> <property name=\"lang\" value=\"en\" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment ** --> <bean id=\"quickstartServerData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl\"> <property name=\"gwhost\" value=\"example.com\" /> <property name=\"gwserv\" value=\"3300\" /> <!-- Do not change the following property values --> <property name=\"progid\" value=\"QUICKSTART\" /> <property name=\"repositoryDestination\" value=\"quickstartDest\" /> <property name=\"connectionCount\" value=\"2\" /> </bean> </blueprint>",
"sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Configures the sap-srfc-server component --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"repositoryDataStore\"> <map> <entry key=\"nplServer\" value-ref=\"nplRepositoryData\" /> </map> </property> </bean> <!-- Configures a metadata Repository --> <bean id=\"nplRepositoryData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl\"> <property name=\"functionTemplates\"> <map> <entry key=\"BOOK_FLIGHT\" value-ref=\"bookFlightFunctionTemplate\" /> </map> </property> </bean> </blueprint>",
"<bean id=\"bookFlightFunctionTemplate\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl\"> <property name=\"importParameterList\"> <list> </list> </property> <property name=\"changingParameterList\"> <list> </list> </property> <property name=\"exportParameterList\"> <list> </list> </property> <property name=\"tableParameterList\"> <list> </list> </property> <property name=\"exceptionList\"> <list> </list> </property> </bean>",
"<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl\"> <property name=\"name\" value=\"TICKET_PRICE\" /> <property name=\"type\" value=\"BCD\" /> <property name=\"byteLength\" value=\"12\" /> <property name=\"unicodeByteLength\" value=\"24\" /> <property name=\"decimals\" value=\"2\" /> <property name=\"optional\" value=\"true\" /> </bean>",
"<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl\"> <property name=\"name\" value=\"CONNINFO\" /> <property name=\"type\" value=\"TABLE\" /> <property name=\"recordMetaData\" ref=\"connectionInfo\" /> </bean>",
"<bean id=\"connectionInfo\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl\"> <property name=\"name\" value=\"CONNECTION_INFO\" /> <property name=\"recordFieldMetaData\"> <list> </list> </property> </bean>",
"<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl\"> <property name=\"name\" value=\"ARRDATE\" /> <property name=\"type\" value=\"DATE\" /> <property name=\"byteLength\" value=\"8\" /> <property name=\"unicodeByteLength\" value=\"16\" /> <property name=\"byteOffset\" value=\"85\" /> <property name=\"unicodeByteOffset\" value=\"170\" /> </bean>",
"<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl\"> <property name=\"name\" value=\"FLTINFO\" /> <property name=\"type\" value=\"STRUCTURE\" /> <property name=\"byteOffset\" value=\"0\" /> <property name=\"unicodeByteOffset\" value=\"0\" /> <property name=\"recordMetaData\" ref=\"flightInfo\" /> </bean>",
"public class SAPEndpoint { public Structure getRequest() throws Exception; public Structure getResponse() throws Exception; }",
"public interface Structure extends org.eclipse.emf.ecore.EObject, java.util.Map<String, Object> { <T> T get(Object key, Class<T> type); }",
"public interface Table<S extends Structure> extends org.eclipse.emf.ecore.EObject, java.util.List<S> { /** * Creates and adds table row at end of row list */ S add(); /** * Creates and adds table row at index in row list */ S add(int index); }",
"org.fusesource.camel.component.sap.model.idoc.Document",
"org.fusesource.camel.component.sap.model.idoc.DocumentList",
"org.fusesource.camel.component.sap.model.idoc.Document org.fusesource.camel.component.sap.model.idoc.Segment",
"// Java package org.fusesource.camel.component.sap.model.idoc; public interface Document extends EObject { // Access the field values from the IDoc control record String getArchiveKey(); void setArchiveKey(String value); String getClient(); void setClient(String value); // Access the IDoc document contents Segment getRootSegment(); }",
"// Java package org.fusesource.camel.component.sap.model.idoc; public interface Segment extends EObject, java.util.Map<String, Object> { // Returns the value of the '<em><b>Parent</b></em>' reference. Segment getParent(); // Return a immutable list of all child segments <S extends Segment> EList<S> getChildren(); // Returns a list of child segments of the specified segment type. <S extends Segment> SegmentList<S> getChildren(String segmentType); EList<String> getTypes(); Document getDocument(); String getDescription(); String getType(); String getDefinition(); int getHierarchyLevel(); String getIdocType(); String getIdocTypeExtension(); String getSystemRelease(); String getApplicationRelease(); int getNumFields(); long getMaxOccurrence(); long getMinOccurrence(); boolean isMandatory(); boolean isQualified(); int getRecordLength(); <T> T get(Object key, Class<T> type); }",
"// Java package org.fusesource.camel.component.sap.model.idoc; public interface SegmentList<S extends Segment> extends EObject, EList<S> { S add(); S add(int index); }",
"Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add();",
"// Java import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.util.IDocUtil; import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.DocumentList; import org.fusesource.camel.component.sap.model.idoc.IdocFactory; import org.fusesource.camel.component.sap.model.idoc.IdocPackage; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.model.idoc.SegmentChildren; // // Create a new IDoc instance using the modelling classes // // Get the SAP Endpoint bean from the Camel context. // In this example, it's a 'sap-idoc-destination' endpoint. SapTransactionalIDocDestinationEndpoint endpoint = exchange.getContext().getEndpoint( \"bean: SapEndpointBeanID \", SapTransactionalIDocDestinationEndpoint.class ); // The endpoint automatically populates some required control record attributes Document document = endpoint.createDocument() // Initialize additional control record attributes document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\"); document.setRecipientPartnerNumber(\"QUICKCLNT\"); document.setRecipientPartnerType(\"LS\"); document.setSenderPartnerNumber(\"QUICKSTART\"); document.setSenderPartnerType(\"LS\"); Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add(); Segment E1BPSCUNEW_Segment = E1SCU_CRE_Segment.getChildren(\"E1BPSCUNEW\").add(); E1BPSCUNEW_Segment.put(\"CUSTNAME\", \"Fred Flintstone\"); E1BPSCUNEW_Segment.put(\"FORM\", \"Mr.\"); E1BPSCUNEW_Segment.put(\"STREET\", \"123 Rubble Lane\"); E1BPSCUNEW_Segment.put(\"POSTCODE\", \"01234\"); E1BPSCUNEW_Segment.put(\"CITY\", \"Bedrock\"); E1BPSCUNEW_Segment.put(\"COUNTR\", \"US\"); E1BPSCUNEW_Segment.put(\"PHONE\", \"800-555-1212\"); E1BPSCUNEW_Segment.put(\"EMAIL\", \" [email protected] \"); E1BPSCUNEW_Segment.put(\"CUSTTYPE\", \"P\"); E1BPSCUNEW_Segment.put(\"DISCOUNT\", \"005\"); E1BPSCUNEW_Segment.put(\"LANGU\", \"E\");",
"// Java document.setIDocType(\"FLCUSTOMER_CREATEFROMDATA01\"); document.setIDocTypeExtension(\"\"); document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\");",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" ... > </idoc:Document>",
"http://sap.fusesource.org/rfc/<Repository Name>/<RFC Name>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Request>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Response xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Response>",
"<BOOK_FLIGHT:FLTINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:FLTINFO>",
"<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complexType name=\"FLTINFO_STRUCTURE\"> </xs:complexType> </xs:schema>",
"<BOOK_FLIGHT:CONNINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> <row ... > ... </row> <row ... > ... </row> </BOOK_FLIGHT:CONNINFO>",
"<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE_TABLE\"> <xs:sequence> <xs:element name=\"row\" minOccures=\"0\" maxOccurs=\"unbounded\" type=\"CONNECTION_INFO_STRUCTURE\"/> <xs:sequence> </xs:sequence> </xs:complexType> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE\"> </xs:complexType> </xs:schema>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" CUSTNAME=\"James Legrand\" PASSFORM=\"Mr\" PASSNAME=\"Travelin Joe\" PASSBIRTH=\"1990-03-17T00:00:00.000-0500\" FLIGHTDATE=\"2014-03-19T00:00:00.000-0400\" TRAVELAGENCYNUMBER=\"00000110\" DESTINATION_FROM=\"SFO\" DESTINATION_TO=\"FRA\"/>",
"yyyy-MM-dd'T'HH:mm:ss.SSSZ",
"DEPDATE=\"2014-03-19T00:00:00.000-0400\"",
"DEPTIME=\"1970-01-01T16:00:00.000-0500\"",
"http://sap.fusesource.org/idoc/ repositoryName / idocType / idocTypeExtension / systemRelease / applicationRelease",
"http://sap.fusesource.org/idoc/MY_REPO/FLCUSTOMER_CREATEFROMDATA01///",
"<convertBodyTo type=\"java.lang.String\"/>",
"<convertBodyTo type=\"org.fusesource.camel.component.sap.model.idoc.Document\"/>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:FLCUSTOMER_CREATEFROMDATA01---=\"http://sap.fusesource.org/idoc/XXX/FLCUSTOMER_CREATEFROMDATA01///\" xmlns:idoc=\"http://sap.fusesource.org/idoc\" creationDate=\"2015-01-28T12:39:13.980-0500\" creationTime=\"2015-01-28T12:39:13.980-0500\" iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" recipientPartnerNumber=\"QUICKCLNT\" recipientPartnerType=\"LS\" senderPartnerNumber=\"QUICKSTART\" senderPartnerType=\"LS\"> <rootSegment xsi:type=\"FLCUSTOMER_CREATEFROMDATA01---:ROOT\" document=\"/\"> <segmentChildren parent=\"//@rootSegment\"> <E1SCU_CRE parent=\"//@rootSegment\" document=\"/\"> <segmentChildren parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\"> <E1BPSCUNEW parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\" document=\"/\" CUSTNAME=\"Fred Flintstone\" FORM=\"Mr.\" STREET=\"123 Rubble Lane\" POSTCODE=\"01234\" CITY=\"Bedrock\" COUNTR=\"US\" PHONE=\"800-555-1212\" EMAIL=\"[email protected]\" CUSTTYPE=\"P\" DISCOUNT=\"005\" LANGU=\"E\"/> </segmentChildren> </E1SCU_CRE> </segmentChildren> </rootSegment> </idoc:Document>",
"from(\"direct:getFlightCustomerInfo\") .to(\"bean:createFlightCustomerGetListRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\") .to(\"bean:returnFlightCustomerInfo\");",
"<route> <from uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"bean:createFlightCustomerGetListRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\"/> <to uri=\"bean:returnFlightCustomerInfo\"/> </route>",
"public void create(Exchange exchange) throws Exception { // Get SAP Endpoint to be called from context. SapSynchronousRfcDestinationEndpoint endpoint = exchange.getContext().getEndpoint(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\", SapSynchronousRfcDestinationEndpoint.class); // Retrieve bean from message containing Flight Customer name to // look up. BookFlightRequest bookFlightRequest = exchange.getIn().getBody(BookFlightRequest.class); // Create SAP Request object from target endpoint. Structure request = endpoint.getRequest(); // Add Customer Name to request if set if (bookFlightRequest.getCustomerName() != null && bookFlightRequest.getCustomerName().length() > 0) { request.put(\"CUSTOMER_NAME\", bookFlightRequest.getCustomerName()); } } else { throw new Exception(\"No Customer Name\"); } // Put request object into body of exchange message. exchange.getIn().setBody(request); }",
"public void createFlightCustomerInfo(Exchange exchange) throws Exception { // Retrieve SAP response object from body of exchange message. Structure flightCustomerGetListResponse = exchange.getIn().getBody(Structure.class); if (flightCustomerGetListResponse == null) { throw new Exception(\"No Flight Customer Get List Response\"); } // Check BAPI return parameter for errors @SuppressWarnings(\"unchecked\") Table<Structure> bapiReturn = flightCustomerGetListResponse.get(\"RETURN\", Table.class); Structure bapiReturnEntry = bapiReturn.get(0); if (bapiReturnEntry.get(\"TYPE\", String.class) != \"S\") { String message = bapiReturnEntry.get(\"MESSAGE\", String.class); throw new Exception(\"BAPI call failed: \" + message); } // Get customer list table from response object. @SuppressWarnings(\"unchecked\") Table<? extends Structure> customerList = flightCustomerGetListResponse.get(\"CUSTOMER_LIST\", Table.class); if (customerList == null || customerList.size() == 0) { throw new Exception(\"No Customer Info.\"); } // Get Flight Customer data from first row of table. Structure customer = customerList.get(0); // Create bean to hold Flight Customer data. FlightCustomerInfo flightCustomerInfo = new FlightCustomerInfo(); // Get customer id from Flight Customer data and add to bean. String customerId = customer.get(\"CUSTOMERID\", String.class); if (customerId != null) { flightCustomerInfo.setCustomerNumber(customerId); } // Put bean into body of exchange message. exchange.getIn().setHeader(\"flightCustomerInfo\", flightCustomerInfo); }",
"from(\"direct:createFlightTrip\") .to(\"bean:createFlightTripRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\") .to(\"bean:returnFlightTripResponse\");",
"<route> <from uri=\"direct:createFlightTrip\"/> <to uri=\"bean:createFlightTripRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\"/> <to uri=\"bean:returnFlightTripResponse\"/> </route>",
"DataFormat jaxb = new JaxbDataFormat(\"org.fusesource.sap.example.jaxb\"); from(\"sap-srfc-server:nplserver:BOOK_FLIGHT\") .unmarshal(jaxb) .multicast() .to(\"direct:getFlightConnectionInfo\", \"direct:getFlightCustomerInfo\", \"direct:getPassengerInfo\") .end() .to(\"direct:createFlightTrip\") .marshal(jaxb);",
"<route> <from uri=\"sap-srfc-server:nplserver:BOOK_FLIGHT\"/> <unmarshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </unmarshal> <multicast> <to uri=\"direct:getFlightConnectionInfo\"/> <to uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"direct:getPassengerInfo\"/> </multicast> <to uri=\"direct:createFlightTrip\"/> <marshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </marshal> </route>",
"@XmlRootElement(name=\"Request\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightRequest { @XmlAttribute(name=\"CUSTNAME\") private String customerName; @XmlAttribute(name=\"FLIGHTDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date flightDate; @XmlAttribute(name=\"TRAVELAGENCYNUMBER\") private String travelAgencyNumber; @XmlAttribute(name=\"DESTINATION_FROM\") private String startAirportCode; @XmlAttribute(name=\"DESTINATION_TO\") private String endAirportCode; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlAttribute(name=\"CLASS\") private String flightClass; }",
"@XmlRootElement(name=\"Response\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightResponse { @XmlAttribute(name=\"TRIPNUMBER\") private String tripNumber; @XmlAttribute(name=\"TICKET_PRICE\") private BigDecimal ticketPrice; @XmlAttribute(name=\"TICKET_TAX\") private BigDecimal ticketTax; @XmlAttribute(name=\"CURRENCY\") private String currency; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlElement(name=\"FLTINFO\") private FlightInfo flightInfo; @XmlElement(name=\"CONNINFO\") private ConnectionInfoTable connectionInfo; }",
"@XmlRootElement(name=\"FLTINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class FlightInfo { @XmlAttribute(name=\"FLIGHTTIME\") private String flightTime; @XmlAttribute(name=\"CITYFROM\") private String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureTime; @XmlAttribute(name=\"CITYTO\") private String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalDate; @XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalTime; }",
"@XmlRootElement(name=\"CONNINFO_TABLE\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfoTable { @XmlElement(name=\"row\") List<ConnectionInfo> rows; }",
"@XmlRootElement(name=\"CONNINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfo { @XmlAttribute(name=\"CONNID\") String connectionId; @XmlAttribute(name=\"AIRLINE\") String airline; @XmlAttribute(name=\"PLANETYPE\") String planeType; @XmlAttribute(name=\"CITYFROM\") String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureTime; @XmlAttribute(name=\"CITYTO\") String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalDate; @XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalTime; }"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sap
|
5.108. initscripts
|
5.108. initscripts 5.108.1. RHBA-2012:1275 - initscripts bug fix update Updated initscripts packages that fix one bug are now available for Red Hat Enterprise Linux 6. The initscripts package contains basic system scripts to boot the system, change runlevels, activate and deactivate most network interfaces, and shut the system down cleanly. Bug Fix BZ# 854852 Previously, the naming policy for VLAN names was too strict. Consequently, the if-down utility did not properly remove descriptively-named interfaces from the /proc/net/vlan/config file. This update removes the name format check and if-down now works as expected in the described scenario. All users of initscripts are advised to upgrade to these updated packages, which fix this bug. 5.108.2. RHBA-2012:0816 - initscripts bug fix and enhancement update Updated initscripts packages that fix multiple bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The initscripts package contains system scripts to boot your system, change runlevels, activate and deactivate most network interfaces, and shut the system down cleanly. Bug Fixes BZ# 781493 The previous version of initscripts did not support IPv6 routing in the same way as IPv4 routing. IPv6 addressing and routing could be achieved only by specifying the ip commands explicitly with the -6 flag in the /etc/sysconfig/network-scripts/rule- DEVICE_NAME configuration file where DEVICE_NAME is the name of the respective network interface. With this update, the related network scripts have been modified to provide support for IPv6-based policy routing and IPv6 routing is now configured separately in the /etc/sysconfig/network-scripts/rule6- DEVICE_NAME configuration file. BZ# 786404 During the first boot after system installation, the kernel entropy was too low to generate high-quality keys for sshd . With this update, the entropy created by the disk activity during system installation is saved in the /var/lib/random-seed file and used for key generation. This provides enough randomness and allows generation of keys based on sufficient entropy. BZ# 582002 In emergency mode, every read request from the /dev/tty device ended with an error and consequently, it was not possible to read from the /dev/tty device. This happened because, when activating single-user mode, the rc.sysinit script called the sulogin application directly. However, sulogin needs to be the console owner to operate correctly. With this update, rc.sysinit starts the rcS-emergency job, which then runs sulogin with the correct console setting. BZ# 588993 The ifconfig utility was not able to handle 20-byte MAC addresses in InfiniBand environments and reported that the provided addresses were too long. With this update, the respective ifconfig commands have been changed to aliases of the respective ip commands and ifconfig now handles 20-byte MAC addresses correctly. BZ# 746045 Due to a logic error, the sysfs() call did not remove the arp_ip_target correctly. As a consequence, the following error was reported when attempting to shut down a bonding device: This update modifies the script so that the error no longer occurs and arp_ip_target is now removed correctly. BZ# 746808 The serial.conf file now contains improved comments on how to create an /etc/init/tty<device>.conf file that corresponds to the active serial device. BZ# 802119 The network service showed error messages on service startup similar to the following: This was due to incorrect splitting of the parsed arguments.
With this update, the arguments are processed correctly and the problem no longer occurs. BZ# 754984 The halt initscript did not contain support for the apcupsd daemon, the daemon for power management and control of APC's UPS (Uninterruptible Power Supply) supplies. Consequently, the supplies were not turned off on power failure. This update adds the support to the script and the UPS models are now turned off in power-failure situations as expected. BZ# 755175 In the previous version of initscripts, the comments with descriptions of variables kernel.msgmnb and kernel.msgmax were incorrect. With this update, the comments have been fixed and the variables are now described correctly. BZ# 787107 Due to an incorrect logic operator, the following error was returned on network service shutdown as the shutdown process failed: With this update, the code of the shutdown initscript has been modified and the error is no longer returned on network service shutdown. BZ# 760018 The system could remain unresponsive for some time during shutdown. This happened because initscript did not check if there were any CIFS (Common Internet File System) share mounts and failed to unmount any mounted CIFS shares before shutdown. With this update, a CIFS shares check has been added and the shares are stopped prior to shutdown. BZ# 721010 The ifup-aliases script was using the ifconfig tool when starting IP alias devices. Consequently, the ifup execution was gradually slowing down significantly with the increasing number of devices on the NIC (Network Interface Card) device. With this update, IP aliases now use the ip tool instead of ifconfig and the performance of the ifup-aliases script remains constant in the scenario described. BZ# 765835 Prior to this update, the netconsole script could not discover and resolve the MAC address of a router specified in the /etc/sysconfig/netconsole file. This happened because the address was resolved as two identical addresses and the script failed. This update modifies the netconsole script so that it handles the MAC address correctly and the device is discovered as expected. BZ# 757637 In the Malay ( ms_MY ) locale, some services did not work properly. This happened due to a typographical mistake in the ms.po file. This update fixes the mistake and services in the ms_MY locale run as expected. BZ# 749610 The primary option for bonding in the ifup-eth tool had a timing issue when bonding NIC devices. Consequently, the bonding was configured, but it was the active interface that was enslaved first. With this update, the timing of bonding with the primary option has been corrected and the device defined in the primary option is enslaved first as expected. Enhancement BZ# 704919 Users can now set the NIS (Network Information Service) domain name by configuring the NISDOMAIN parameter in the /etc/sysconfig/network file, or other relevant configuration files. Users of initscripts should upgrade to these updated packages, which fix these bugs and add this enhancement.
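For illustration, the IPv6 policy routing support added for BZ# 781493 means that rules which previously had to be written as explicit ip commands with the -6 flag in /etc/sysconfig/network-scripts/rule- DEVICE_NAME can now be placed in /etc/sysconfig/network-scripts/rule6- DEVICE_NAME instead. A hypothetical rule6-eth0 file might contain entries such as the following (the addresses and table number are examples only and are not taken from the errata):

from 2001:db8:1::/64 table 1
to 2001:db8:2::/64 table 1

Similarly, the NISDOMAIN enhancement from BZ# 704919 can be used by adding a single line such as NISDOMAIN=example.com to the /etc/sysconfig/network file.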
|
[
"ifdown-eth: line 64: echo: write error: Invalid argument",
"Error: either \"dev\" is duplicate, or \"20\" is a garbage.",
"69: echo: write error: Invalid argument"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/initscripts
|
Chapter 2. Preparing Ceph Storage nodes for deployment
|
Chapter 2. Preparing Ceph Storage nodes for deployment Red Hat Ceph Storage nodes are bare metal systems with IPMI power management. Director installs Red Hat Enterprise Linux on each node. Director communicates with each node through the Provisioning network during the introspection and provisioning processes. All nodes connect to the Provisioning network through the native VLAN. For more information about bare metal provisioning before overcloud deployment, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director guide. For a complete guide to bare metal provisioning, see Configuring the Bare Metal Provisioning service . 2.1. Cleaning Ceph Storage node disks Ceph Storage OSDs and journal partitions require factory clean disks. All data and metadata must be erased by the Bare Metal Provisioning service (ironic) from these disks before installing the Ceph OSD services. You can configure director to delete all disk data and metadata by default by using the Bare Metal Provisioning service. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step to boot the nodes each time a node is set to available . Warning The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk but it does not perform a secure erase. A secure erase takes much longer. Procedure Open /home/stack/undercloud.conf and add the following parameter: Save /home/stack/undercloud.conf . Update the undercloud configuration. 2.2. Registering nodes Register the nodes to enable communication with director. Procedure Create a node inventory JSON file in /home/stack . Enter hardware and power management details for each node. For example: Save the new file. Initialize the stack user: Import the JSON inventory file into director and register nodes Replace <inventory_file> with the name of the file created in the first step. Assign the kernel and ramdisk images to each node: 2.3. Verifying available Red Hat Ceph Storage packages Verify all required packages are available to avoid overcloud deployment failures. 2.3.1. Verifying cephadm package installation Verify the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster. The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure it is present in the image. 2.4. Defining the root disk for multi-disk Ceph clusters Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process. Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Properties that identify the root disk . Procedure Verify the disk information from the hardware introspection of each node: Replace <node_uuid> with the UUID of the node. Replace <output_file_name> with the name of the file that contains the output of the node introspection. 
For example, the data for one node might show three disks: Set the root disk for the node by using a unique hardware property: (undercloud)USD openstack baremetal node set --property root_device='{<property_value>}' <node-uuid> Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk. Replace <node_uuid> with the UUID of the node. Note A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk: (undercloud)USD openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 Configure the BIOS of each node to first boot from the network and then the root disk. Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk. 2.4.1. Properties that identify the root disk There are several properties that you can define to help director identify the root disk: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1. Important Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names because the value can change when the node boots. 2.5. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement The default image for a Red Hat OpenStack Platform (RHOSP) deployment is overcloud-hardened-uefi-full.qcow2 . The overcloud-hardened-uefi-full.qcow2 image uses a valid Red Hat OpenStack Platform (RHOSP) subscription. You can use the overcloud-minimal image when you do not want to consume your subscription entitlements, to avoid reaching the limit of your paid Red Hat subscriptions. This is useful, for example, when you want to provision nodes with only Ceph daemons, or when you want to provision a bare operating system (OS) where you do not want to run any other OpenStack services. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes . Note The overcloud-minimal image supports only standard Linux bridges. The overcloud-minimal image does not support Open vSwitch (OVS) because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds. Procedure Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file. Add or update the image property for the nodes that you want to use the overcloud-minimal image. You can set the image to overcloud-minimal on specific nodes, or for all nodes for a role. Note The overcloud minimal image is not a whole disk image. The kernel and ramdisk must be specified in the /home/stack/templates/overcloud-baremetal-deploy.yaml file. 
Specific nodes All nodes for a specific role In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False . Run the provisioning command: Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud ceph deploy command. 2.6. Designating nodes for Red Hat Ceph Storage To designate nodes for Red Hat Ceph Storage, you must create a new role file to configure the CephStorage role, and configure the bare metal nodes with a resource class for CephStorage . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data.yaml that includes the Controller , Compute , and CephStorage roles: Open roles_data.yaml and ensure it has the following parameters and sections: Section/Parameter Value Role comment Role: CephStorage Role name name: CephStorage description Ceph node role HostnameFormatDefault %stackname%-novaceph-%index% deprecated_nic_config_name ceph.yaml Register the Ceph nodes for the overcloud by adding them to your node definition template. Inspect the node hardware: Tag each bare metal node that you want to designate for Ceph with a custom Ceph resource class: Replace <node> with the ID of the bare metal node. Add the CephStorage role to your overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes: Run the provisioning command: Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : Additional resources For more information on node registration, see Section 2.2, "Registering nodes" . For more information about inspecting node hardware, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.
|
[
"clean_nodes=true",
"openstack undercloud install",
"{ \"nodes\":[ { \"mac\":[ \"b1:b1:b1:b1:b1:b1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"b2:b2:b2:b2:b2:b2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"b3:b3:b3:b3:b3:b3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"c1:c1:c1:c1:c1:c1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" }, { \"mac\":[ \"c2:c2:c2:c2:c2:c2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" }, { \"mac\":[ \"c3:c3:c3:c3:c3:c3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" }, { \"mac\":[ \"d1:d1:d1:d1:d1:d1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.211\" }, { \"mac\":[ \"d2:d2:d2:d2:d2:d2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.212\" }, { \"mac\":[ \"d3:d3:d3:d3:d3:d3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.213\" } ] }",
"source ~/stackrc",
"openstack overcloud node import <inventory_file>",
"openstack overcloud node configure <node>",
"(undercloud)USD openstack baremetal introspection data save <node_uuid> --file <output_file_name>",
"[ { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sda\", \"wwn_vendor_extension\": \"0x1ea4dcc412a9632b\", \"wwn_with_extension\": \"0x61866da04f3807001ea4dcc412a9632b\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380700\", \"serial\": \"61866da04f3807001ea4dcc412a9632b\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdb\", \"wwn_vendor_extension\": \"0x1ea4e13c12e36ad6\", \"wwn_with_extension\": \"0x61866da04f380d001ea4e13c12e36ad6\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380d00\", \"serial\": \"61866da04f380d001ea4e13c12e36ad6\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdc\", \"wwn_vendor_extension\": \"0x1ea4e31e121cfb45\", \"wwn_with_extension\": \"0x61866da04f37fc001ea4e31e121cfb45\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f37fc00\", \"serial\": \"61866da04f37fc001ea4e31e121cfb45\" } ]",
"- name: Ceph count: 3 instances: - hostname: overcloud-ceph-0 name: node00 image: href: file:///var/lib/ironic/images/overcloud-minimal.raw kernel: file://var/lib/ironic/images/overcloud-minimal.vmlinuz ramdisk: file://var/lib/ironic/images/overcloud-minimal.initrd - hostname: overcloud-ceph-1 name: node01 image: href: file:///var/lib/ironic/images/overcloud-minimal.raw kernel: file://var/lib/ironic/images/overcloud-minimal.vmlinuz ramdisk: file://var/lib/ironic/images/overcloud-minimal.initrd - hostname: overcloud-ceph-2 name: node02 image: href: file:///var/lib/ironic/images/overcloud-minimal.raw kernel: file://var/lib/ironic/images/overcloud-minimal.vmlinuz ramdisk: file://var/lib/ironic/images/overcloud-minimal.initrd",
"- name: Ceph count: 3 defaults: image: href: file:///var/lib/ironic/images/overcloud-minimal.raw kernel: file://var/lib/ironic/images/overcloud-minimal.vmlinuz ramdisk: file://var/lib/ironic/images/overcloud-minimal.initrd instances: - hostname: overcloud-ceph-0 name: node00 - hostname: overcloud-ceph-1 name: node01 - hostname: overcloud-ceph-2 name: node02",
"rhsm_enforce: False",
"(undercloud)USD openstack overcloud node provision --stack overcloud --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate Controller Compute CephStorage -o /home/stack/templates/roles_data.yaml \\",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.CEPH <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: CephStorage count: 5 defaults: resource_class: baremetal.CEPH",
"(undercloud)USD openstack overcloud node provision --stack stack --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_prepare-ceph-storage-nodes-for-overcloud-deployment_deployingcontainerizedrhcs
|
Chapter 13. ImageContentPolicy [config.openshift.io/v1]
|
Chapter 13. ImageContentPolicy [config.openshift.io/v1] Description ImageContentPolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To pull image from mirrors by tags, should set the "allowMirrorByTags". Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To pull image from mirrors by tags, should set the "allowMirrorByTags". Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. 
When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description allowMirrorByTags boolean allowMirrorByTags if true, the mirrors can be used to pull the images that are referenced by their tags. Default is false, the mirrors only work when pulling the images that are referenced by their digests. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Forcing digest-pulls for mirrors avoids that issue. mirrors array (string) mirrors is zero or more repositories that may also contain the same images. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. No mirror will be configured. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagecontentpolicies DELETE : delete collection of ImageContentPolicy GET : list objects of kind ImageContentPolicy POST : create an ImageContentPolicy /apis/config.openshift.io/v1/imagecontentpolicies/{name} DELETE : delete an ImageContentPolicy GET : read the specified ImageContentPolicy PATCH : partially update the specified ImageContentPolicy PUT : replace the specified ImageContentPolicy /apis/config.openshift.io/v1/imagecontentpolicies/{name}/status GET : read status of the specified ImageContentPolicy PATCH : partially update status of the specified ImageContentPolicy PUT : replace status of the specified ImageContentPolicy 13.2.1. /apis/config.openshift.io/v1/imagecontentpolicies Table 13.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageContentPolicy Table 13.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 13.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentPolicy Table 13.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentPolicy Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body ImageContentPolicy schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 202 - Accepted ImageContentPolicy schema 401 - Unauthorized Empty 13.2.2. /apis/config.openshift.io/v1/imagecontentpolicies/{name} Table 13.9. Global path parameters Parameter Type Description name string name of the ImageContentPolicy Table 13.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageContentPolicy Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.12. Body parameters Parameter Type Description body DeleteOptions schema Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentPolicy Table 13.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.15. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentPolicy Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Patch schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentPolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentPolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 401 - Unauthorized Empty 13.2.3. /apis/config.openshift.io/v1/imagecontentpolicies/{name}/status Table 13.22. Global path parameters Parameter Type Description name string name of the ImageContentPolicy Table 13.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageContentPolicy Table 13.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.25. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentPolicy Table 13.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.27. Body parameters Parameter Type Description body Patch schema Table 13.28. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentPolicy Table 13.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.30. Body parameters Parameter Type Description body ImageContentPolicy schema Table 13.31. HTTP responses HTTP code Response body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 401 - Unauthorized Empty
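For reference, the following is a minimal sketch of an ImageContentPolicy manifest that exercises the fields described in this chapter; the object name and registry host names are placeholders for illustration, not values taken from this document.

apiVersion: config.openshift.io/v1
kind: ImageContentPolicy
metadata:
  name: example-mirror-policy
spec:
  repositoryDigestMirrors:
  - source: registry.example.com/team/app       # repository referenced in pod pull specs
    mirrors:                                     # listed in order of preference; the source is lower priority than the mirrors
    - mirror-a.example.net/team/app
    - mirror-b.example.net/team/app
    allowMirrorByTags: false                     # default: mirrors are consulted only for pulls by digest

In this sketch, allowMirrorByTags is left at its default of false because, as noted above, pulling by tag can yield different images depending on the endpoint, while digest-only mirroring avoids that issue. A manifest of this form could be applied with standard cluster tooling, for example oc apply -f <file>.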
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/imagecontentpolicy-config-openshift-io-v1
|
Configuration Reference
|
Configuration Reference Red Hat OpenStack Platform 16.2 Configuring Red Hat OpenStack Platform environments OpenStack Documentation Team [email protected] Abstract This document is for system administrators who want to look up configuration options. It contains lists of configuration options available with OpenStack and uses auto-generation to generate options and the descriptions from the code for each project.
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/index
|
Chapter 45. Known Issues
|
Chapter 45. Known Issues RHEV OVA images of Atomic Host do not work RHEV OVA images of Atomic Host currently cannot be imported into RHEV. See this Bugzilla for details. podman , buildah , and skopeo need full name of the image Currently, to work with an image using podman , buildah , and skopeo , you cannot use the image's short name, such as rhel7-aarch64 . Instead, specify its full name, including the registry, for example registry.access.redhat.com/rhel7-aarch64 : etcd 3.2.22-24 image contains incorrect version of the binary On Thursday January 31, 2019 Red Hat released to the public a version of etcd that was incorrectly labeled. This incorrect image was released in the rhel-7-server-extras-rpms channel and the Red Hat Container Catalog. Specifically, an etcd container labeled 3.2.22-24 was released with etcd 3.3.11 inside of the container image. By Tuesday February 5th, 2019 Red Hat realized the issue and reverted the etcd container and RPM channels back to the correct image. The etcd container 3.2.22-18 is now the correct container. OpenShift 3.0 to 3.9 uses RPMs by default for etcd installations. OpenShift 3.10 to 3.11 uses container images for etcd installations. Not all customers are affected by this issue. If you run an RPM-based etcd with a RHEL7 channel attached to the RHEL hosts and you issued a yum update for all RPMs installed on your etcd hosts between the 6 days of January 31 to Feb 5th, you have pulled the incorrect RPMs. Or, if you are running a container based etcd and you have upgraded your cluster, scaled up the etcd nodes or manually installed the latest etcd container image between the 6 days of January 31 to Feb 5th, you have pulled the incorrect container image. See this Knowledgebase article for details on this issue and for how to check whether your systems have been affected. The buildah package is not included by default The buildah package is not included by default in RHEL Atomic Host 7.5.3. To add it, run: The current plan is to include the buildah package in RHEL Atomic Host 7.5.4, so it will not be necessary to install it separately. The checkpoint and restore feature of podman is not working Due to a bug introduced in CRIU, the checkpoint and restore feature of podman is not working. Docker is not impacted. ostree remote configuration might be missing on new installations The 'ostree' remote configuration might be missing on new installations of RHEL Atomic Host 7.5.0. Consequently, when the rpm-ostreed daemon starts, it does not find configuration of the remote, which causes the rpm-ostree command to hang. So far, this issue has been found on new Kickstart installations, but not on ISO or cloud installations. To fix the problem, follow these steps: Populate the /etc/ostree/remotes.d/ directory with an ostree remote configuration. This configuration should match the remote in the .origin file that is in /sysroot/ostree/deploy/rhel-atomic-host/deploy/ . Example contents of /etc/ostree/remotes.d/redhat.conf : Restart the rpm-ostreed service: Alternatively, you can fix the problem by simply registering the system with subscription-manager . Containers running systemd do not work Prior to Atomic Host 7.5.0, due to a bug, the container_manage_cgroup SELinux boolean permitted containers to modify cgroup settings whether the boolean is on or off. In 7.5.0, this has been fixed. Now, if you need to run containers with systemd, you need to set the boolean to on : See this Knowledgebase solution for more information.
Old LVM configuration file sometimes not available after upgrading If an LVM operation happens during an Atomic Host upgrade, the old LVM configuration file might not be available after the upgrade. You would see this error message: To work around this, ensure that no LVM operation happens during an upgrade. A common LVM operation that might happen is thin-pool auto-extension. To prevent thin-pool auto-extension, upgrade as follows: Disable auto-extension: Upgrade: After upgrade or reboot, enable auto-extension: In an extremely rare case, this scenario will break LVM. To allow recovery from broken LVM, back up /etc/lvm before upgrading. ( BZ#1365297 ) The root partition might have too little space for upgrades The default Atomic Host root partition might be too small for upgrades. To upgrade, you might need to expand the root logical volume. See these sections: Changing the Default Size of the Root Partition During Installation Changing the Size of the Root Partition After Installation Alternatively, you can free space on the root partition by pruning the deployment. For background information on the root partition, see Managing Storage in Red Hat Enterprise Linux Atomic Host . atomic uninstall uninstalls all sssd containers Running this command on an sssd container: incorrectly uninstalls not only the container-name sssd container, but all sssd containers. To mitigate this, do not uninstall an sssd container if you use any other sssd containers. Cannot use memory cgroups without swap on IBM POWER8 series The "runc exec" command on the little-endian variant of IBM Power Systems uses significantly more memory than on AMD64 and Intel 64. Therefore, to prevent running out of memory, do not set cgroup memory limit to less than 100 megabytes. By default, no user namespaces are allowed By default, the new 7.4 kernel restricts the number of user namespaces to 0. To work around this, increase the user namespace limit: Cockpit can start dockerd when using docker, but not docker-latest Beginning with RHEL Atomic Host 7.3.5, service-related functions in Cockpit might not work as expected if you run with docker-latest instead of docker . Notably, Cockpit fails to start the docker daemon when running with docker-latest . Exposing the docker daemon through a TCP port is not secure The docker daemon does no authentication, so binding it to a TCP port would give root access to any process with access to that TCP port. Red Hat advices against binding docker to a TCP port. See Access port options for details. atomic scan will try to connect to the Internet if you do not use atomic install first When you install the openscap container image with the atomic install command, the /etc/oscapd/oscapd.ini configuration file is placed on the host machine and gets exposed to the container. The oscapd.ini file contains the information about where to fetch Open Vulnerability and Assessment Language (OVAL) content from. The default setting is to use the CVE data from inside the container and won't connect to the Internet unless you explicitly configure it so. When you do not use atomic install and directly start scanning with atomic scan , atomic will fetch the container and run it immediately ignoring the INSTALL label. This means that /etc/oscapd/oscapd.ini won't be placed on the host system and be exposed to the container and the default behavior of the openscap-daemon itself inside the container will be used. The default behavior is to download CVE data from Red Hat's URL, connecting to the Internet. 
Because of this, it is recommended that you use atomic install before scanning containers so that the settings from the oscapd.ini file are used. If not, scanning will still work, but be aware of the difference in the behavior of the openscap-daemon in both cases. Red Hat Enterprise Linux Atomic Host does not support FIPS mode FIPS mode cannot be enabled on RHEL Atomic Host. Upgrade to 7.3 from release versions older than 7.2.7 fails with an error on Atomic Host Attempting to upgrade from RHEL Atomic Host 7.2.6-1 or older to 7.3 fails with the following error: There are three possible workarounds: 1) Disable SELinux and upgrade as usual: 2) Stop rpm-ostreed and change the SELinux context: 3) Deploy Atomic Host 7.2.7 first and then upgrade: Atomic Host does not support /usr as a mount point Atomic Host does not support /usr as a mount point. As a consequence, Anaconda could crash if such a partition layout is configured. To work around this issue, do not make /usr a mount point. etcdctl backup now reuses backup of the etcd member to avoid data loss Previously, a member failed to be added to the etcd cluster when the database size was more than 700 MB, resulting in data loss. To work around this issue, the etcdctl backup command has been extended with options to reuse backup of the etcd member. rhel-push-plugin service does not restart after package upgrade The docker service requires rhel-push-plugin to be started before itself. However, after upgrading the docker and docker-rhel-push-plugin packages, the docker daemon restarts while using the already existing rhel-push-plugin service in memory without restarting it. To work around this issue, manually restart rhel-push-plugin first, and the docker service afterwards. etcd will not start if its current version is older than the etcd cluster version etcd checks if the etcd version is older than the etcd cluster version. If this is the case, etcd will not start and applications dependent on etcd can fail. This issue prevents RHEL Atomic Host from cleanly rolling back from version 7.2.6 to earlier versions. In a kubernetes cluster, if the nodes are newer than the master, they may fail to start. In a kubernetes cluster, if the master contains an older version of kubernetes than the nodes, the nodes may fail to start. To work around this issue, always upgrade the master nodes first. As a result, the cluster will continue to function as expected. docker 1.10 introduced a seccomp filter which will cause some syscalls to fail inside containers. As a workaround, pass the --security-opt seccomp:unconfined option to docker when creating a container. Docker maintains a help page with a comprehensive list of blocked calls and the reasoning behind them, see https://docs.docker.com/engine/security/seccomp/ . Note that the list is not entirely identical to what is blocked in Red Hat Enterprise Linux. Upgrade of docker from 1.9 to 1.10 loses image metadata Under certain circumstances, upgrading from docker 1.9 to docker 1.10 can result in a loss of docker image tag metadata. The underlying image layers remain intact and can be seen by running docker images -a. The metadata can be recovered, if it is present on a remote registry by simply re-running docker pull. This command will restore the metadata while avoiding a transfer of the already existing layer data. Atomic Host installation offers BTRFS but it is not supported. The RHEL Atomic Host installer offers BTRFS as a partition option, but the tree does not include btrfs-progs.
Consequently, if you choose this option in the installer, you will not be able to proceed with the installation until you choose another option. When the root partition runs out of free space RHEL Atomic Host allocates 3GB of storage to the root partition, which includes the docker volumes (units of storage that a running container can request from the host system). This makes it easy for the root partition to run out of storage space. If insufficient space is available, upgrading with atomic host upgrade will fail. In order to support more volume space, more physical storage must be added to the system, or the root Logical Volume must be extended. By default, 40% from the other volume, will be reserved for storing the container images. The other 60% can be used to extend the root partition. For detailed instructions, see https://access.redhat.com/documentation/en/red-hat-enterprise-linux-atomic-host/version-7/getting-started-with-containers/#changing_the_size_of_the_root_partition_after_installation . Rescue mode does not work in RHEL Atomic Host. The Anaconda installer is unable to find a previously installed Atomic Host system when in rescue mode. Consequently, rescue mode does not work and should not be used. The brandbot.path service may cause subscription-manager to change the /etc/os-release file in 7.1 installations. The /etc/os-release file may still specify the 7.1 version even after Atomic Host has been upgraded to 7.2 using the atomic host upgrade command. This occurs because the underlying ostree tool preserves modified files in /etc . As a workaround, after upgrading to 7.2, run the following command: This way, the /etc/os-release file will return to an unmodified state, and because brandbot.path is masked in 7.2.0, it will not be modified in the future by subscription-manager, and future upgrades will show the correct version. When running kube-apiserver on port 443 in secure mode, some capabilities are missing. As a workaround, the kube-apiserver binary has to be modified by running
|
[
"podman pull registry.access.redhat.com/rhel7-aarch64",
"rpm-ostree install buildah",
"[remote \"rhel-atomic-host-ostree\"] url=file:///install/ostree/repo",
"systemctl restart rpm-ostreed.service",
"setebool -P container_manage_cgroup on",
"Failed to read modified config file 'lvm/...'",
"lvchange --monitor n VG/ThinPoolLV",
"atomic host upgrade",
"lvchange --monitor y VG/ThinPoolLV",
"atomic uninstall --name= container-name",
"echo 15000 > /proc/sys/user/max_user_namespaces",
"\"error: fsetxattr: Invalid argument\"",
"setenforce 0 atomic host upgrade",
"# systemctl stop rpm-ostreed # cp /usr/libexec/rpm-ostreed /usr/local/bin/rpm-ostreed # chcon -t install_exec_t /usr/local/bin/rpm-ostreed # /usr/local/bin/rpm-ostreed # atomic host upgrade",
"# atomic host deploy 7.2.7 # systemctl reboot # atomic host upgrade",
"cp /usr/etc/os-release /etc",
"chown root:root /usr/bin/kube-apiserver chmod 700 /usr/bin/kube-apiserver setcap CAP_NET_BIND_SERVICE=ep /usr/bin/kube-apiserver"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/known_issues
|
Chapter 3. Installation and update
|
Chapter 3. Installation and update 3.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 3.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 3.1. OpenShift Container Platform installation targets and dependencies 3.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.15 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.3. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.15, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. For example, using a persistent storage framework from a another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.15, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.4. 
Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.15, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. 
Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. 
In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. 
The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. 
Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. 
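For illustration only, an override entry on the ClusterVersion resource, which is named version, might look like the following sketch. The kind, group, namespace, and name values are placeholders for the component that you intend to remove from CVO management, not a recommendation to override any particular component:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment            # resource kind of the overridden component (placeholder)
    group: apps                 # API group of that resource kind (placeholder)
    namespace: <component_namespace>
    name: <component_name>
    unmanaged: true             # removes the component from CVO management

Removing the entry from spec.overrides, or setting unmanaged back to false, returns the component to CVO management.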
Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Next steps Selecting a cluster installation method and preparing it for users
|
[
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/architecture-installation
|
25.2. Configure Transactions
|
25.2. Configure Transactions 25.2.1. Configure Transactions (Library Mode) In Red Hat JBoss Data Grid, transactions in Library mode can be configured with synchronization and transaction recovery. Transactions in their entirety (which includes synchronization and transaction recovery) are not available in Remote Client-Server mode. In Library mode, transactions are configured as follows: Procedure 25.1. Configure Transactions in Library Mode (XML Configuration) Set the versioning parameter's enabled parameter to true . Set the versioningScheme parameter to either NONE or SIMPLE to set the versioning scheme used. Procedure 25.2. Configure Transactions in Library Mode (Programmatic Configuration) Set the transaction mode. Select and set a lookup class. See the table below this procedure for a list of available lookup classes. The lockingMode value determines whether optimistic or pessimistic locking is used. If the cache is non-transactional, the locking mode is ignored. The default value is OPTIMISTIC . The useSynchronization value configures the cache to register a synchronization with the transaction manager, or register itself as an XA resource. The default value is true (use synchronization). The recovery parameter enables recovery for the cache when set to true . The recoveryInfoCacheName sets the name of the cache where recovery information is held. The default name of the cache is specified by RecoveryConfiguration.DEFAULT_RECOVERY_INFO_CACHE . Configure Write Skew Check The writeSkew check determines if a modification to the entry from a different transaction should roll back the transaction. Write skew set to true requires isolation_level set to REPEATABLE_READ . The default value for writeSkew and isolation_level are false and READ_COMMITTED respectively. Configure Entry Versioning For clustered caches, enable write skew check by enabling entry versioning and setting its value to SIMPLE . Table 25.1. Transaction Manager Lookup Classes Class Name Details org.infinispan.transaction.lookup.DummyTransactionManagerLookup Used primarily for testing environments. This testing transaction manager is not for use in a production environment and is severely limited in terms of functionality, specifically for concurrent transactions and recovery. org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup The default transaction manager when Red Hat JBoss Data Grid runs in a standalone environment. It is a fully functional JBoss Transactions based transaction manager that overcomes the functionality limits of the DummyTransactionManager . org.infinispan.transaction.lookup.GenericTransactionManagerLookup GenericTransactionManagerLookup is used by default when no transaction lookup class is specified. This lookup class is recommended when using JBoss Data Grid with Java EE-compatible environment that provides a TransactionManager interface, and is capable of locating the Transaction Manager in most Java EE application servers. If no transaction manager is located, it defaults to DummyTransactionManager . org.infinispan.transaction.lookup.JBossTransactionManagerLookup The JbossTransactionManagerLookup finds the standard transaction manager running in the application server. This lookup class uses JNDI to look up the TransactionManager instance, and is recommended when custom caches are being used in JTA transactions. Report a bug 25.2.2. Configure Transactions (Remote Client-Server Mode) Red Hat JBoss Data Grid does not offer transactions in Remote Client-Server mode. 
The default and only supported configuration is non-transactional, which is set as follows: Example 25.1. Transaction Configuration in Remote Client-Server Mode
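In Library mode, by contrast, application code typically drives a transactional cache through the JTA TransactionManager. The following sketch is illustrative only and does not appear in the original procedure; it assumes an EmbeddedCacheManager instance named cacheManager and a cache named txCache that uses the transactional configuration shown above, and it simplifies exception handling:

import javax.transaction.TransactionManager;
import org.infinispan.Cache;

// Obtain the transactional cache and its associated transaction manager.
Cache<String, String> cache = cacheManager.getCache("txCache");
TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

tm.begin();
try {
    // Writes made inside the transaction commit or roll back atomically.
    cache.put("key", "value");
    tm.commit();
} catch (Exception e) {
    // Roll back if the transaction is still associated with this thread.
    if (tm.getTransaction() != null) {
        tm.rollback();
    }
}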
|
[
"<namedCache <!-- Additional configuration information here -->> <transaction <!-- Additional configuration information here --> > <locking <!-- Additional configuration information here --> > <versioning enabled=\"{true,false}\" versioningScheme=\"{NONE|SIMPLE}\"/> <!-- Additional configuration information here --> </namedCache>",
"Configuration config = new ConfigurationBuilder()/* ... */.transaction() .transactionMode(TransactionMode.TRANSACTIONAL) .transactionManagerLookup(new GenericTransactionManagerLookup()) .lockingMode(LockingMode.OPTIMISTIC) .useSynchronization(true) .recovery() .recoveryInfoCacheName(\"anotherRecoveryCacheName\").build();",
"Configuration config = new ConfigurationBuilder()/* ... */.locking() .isolationLevel(IsolationLevel.REPEATABLE_READ).writeSkewCheck(true);",
"Configuration config = new ConfigurationBuilder()/* ... */.versioning() .enable() .scheme(VersioningScheme.SIMPLE);",
"<cache> <!-- Additional configuration elements here --> <transaction mode=\"NONE\" /> <!-- Additional configuration elements here --> </cache>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-configure_transactions
|
Chapter 2. Setting up the environment for an OpenShift Container Platform installation
|
Chapter 2. Setting up the environment for an OpenShift Container Platform installation 2.1. Preparing the provisioner node on IBM Cloud(R) Bare Metal (Classic) infrastructure Perform the following steps to prepare the provisioner node. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt kni Start firewalld : USD sudo systemctl start firewalld Enable firewalld : USD sudo systemctl enable firewalld Start the http service: USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Set the ID of the provisioner node: USD PRVN_HOST_ID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl hardware list Set the ID of the public subnet: USD PUBLICSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the ID of the private subnet: USD PRIVSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the provisioner node public IP address: USD PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r) Set the CIDR for the public network: USD PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the public network: USD PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR Set the gateway for the public network: USD PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r) Set the private IP address of the provisioner node: USD PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | \ jq .primaryBackendIpAddress -r) Set the CIDR for the private network: USD PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the private network: USD PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR Set the gateway for the private network: USD PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r) Set up the bridges for the baremetal and provisioning networks: USD sudo nohup bash -c " nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway 
USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 USDPRIV_GATEWAY\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 " Note For eth1 and eth2 , substitute the appropriate interface name, as needed. If required, SSH back into the provisioner node: # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2 Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure . In step 1, click Download pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 2.2. Configuring the public subnet All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud(R) Bare Metal (Classic) does not provide a DHCP server on the subnet. Set it up separately on the provisioner node. You must reset the BASH variables defined when preparing the provisioner node. Rebooting the provisioner node after preparing it will delete the BASH variables previously set. Procedure Install dnsmasq : USD sudo dnf install dnsmasq Open the dnsmasq configuration file: USD sudo vi /etc/dnsmasq.conf Add the following configuration to the dnsmasq configuration file: interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile 1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same the IP address. Replace <pub_cidr> with the CIDR of the public subnet. 2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the IP address of the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the IP address of the provisioner node's public IP address on the baremetal network. To retrieve the value for <pub_cidr> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <pub_gateway> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <prvn_priv_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | \ jq .primaryBackendIpAddress -r Replace <id> with the ID of the provisioner node. To retrieve the value for <prvn_pub_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r Replace <id> with the ID of the provisioner node. 
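For reference, a completed dnsmasq configuration might look like the following example. All of the addresses shown are illustrative assumptions, not values from this procedure: a public subnet of 141.125.65.192/28 with gateway 141.125.65.193, an unused public IP address of 141.125.65.205, and a provisioner node with a private IP address of 10.196.130.140 and a public IP address of 141.125.65.200. Substitute the values that you retrieved with the ibmcloud commands above:

interface=baremetal
except-interface=lo
bind-dynamic
log-dhcp
dhcp-range=141.125.65.205,141.125.65.205,28
dhcp-option=baremetal,121,0.0.0.0/0,141.125.65.193,10.196.130.140,141.125.65.200
dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile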
Obtain the list of hardware for the cluster: USD ibmcloud sl hardware list Obtain the MAC addresses and IP addresses for each node: USD ibmcloud sl hardware detail <id> --output JSON | \ jq '.networkComponents[] | \ "\(.primaryIpAddress) \(.macAddress)"' | grep -v null Replace <id> with the ID of the node. Example output "10.196.130.144 00:e0:ed:6a:ca:b4" "141.125.65.215 00:e0:ed:6a:ca:b5" Make a note of the MAC address and IP address of the public network. Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network. Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file: USD sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile Example input 00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1 ... Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name. Start dnsmasq : USD sudo systemctl start dnsmasq Enable dnsmasq so that it starts when booting the node: USD sudo systemctl enable dnsmasq Verify dnsmasq is running: USD sudo systemctl status dnsmasq Example output β dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service ββ3101 /usr/sbin/dnsmasq -k Open ports 53 and 67 with UDP protocol: USD sudo firewall-cmd --add-port 53/udp --permanent USD sudo firewall-cmd --add-port 67/udp --permanent Add provisioning to the external zone with masquerade: USD sudo firewall-cmd --change-zone=provisioning --zone=external --permanent This step ensures network address translation for IPMI calls to the management subnet. Reload the firewalld configuration: USD sudo firewall-cmd --reload 2.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.15 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 2.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 2.5. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. 
Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud(R) Bare Metal (Classic) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud(R) Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file. Procedure Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: "/dev/sda" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: "/dev/sda" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 3 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR . This is required for IBM Cloud(R) Bare Metal (Classic) infrastructure. 2 4 Add the MAC address of the private provisioning network NIC for the corresponding node. Note You can use the ibmcloud command-line utility to retrieve the password. USD ibmcloud sl hardware detail <id> --output JSON | \ jq '"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)"' Replace <id> with the ID of the node. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file into the directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 2.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 2.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. 
If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. 
Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 2.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. 
noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 2.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 2.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 2.4. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 2.8. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 2.9. 
Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: $ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 2.10. Following the progress of the installation During the deployment process, you can check the installation's overall status by running the tail command against the .openshift_install.log log file in the installation directory: $ tail -f /path/to/install-dir/.openshift_install.log
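If you prefer to wait on specific installation milestones instead of following the raw log, you can use the installer's wait-for subcommands. The following invocations are illustrative and assume the same ~/clusterconfigs assets directory that is used above:

$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for bootstrap-complete --log-level debug
$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for install-complete

When the installation completes, the installer writes the cluster kubeconfig to the auth subdirectory of the assets directory, and you can perform a basic health check, for example:

$ export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
$ oc get nodes
$ oc get clusteroperators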
|
[
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"β dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service ββ3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.15",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_ibm_cloud_bare_metal_classic/install-ibm-cloud-installation-workflow
|
Chapter 40. hypervisor
|
Chapter 40. hypervisor This chapter describes the commands under the hypervisor command. 40.1. hypervisor list List hypervisors Usage: Table 40.1. Command arguments Value Summary -h, --help Show this help message and exit --matching <hostname> Filter hypervisors using <hostname> substring --marker <marker> The uuid of the last hypervisor of the page; displays list of hypervisors after marker . (supported with --os-compute-api-version 2.33 or above) --limit <limit> Maximum number of hypervisors to display. note that there is a configurable max limit on the server, and the limit that is used will be the minimum of what is requested here and what is configured in the server. (supported with --os-compute-api-version 2.33 or above) --long List additional fields in output Table 40.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 40.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 40.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 40.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 40.2. hypervisor show Display hypervisor details Usage: Table 40.6. Positional arguments Value Summary <hypervisor> Hypervisor to display (name or id) Table 40.7. Command arguments Value Summary -h, --help Show this help message and exit Table 40.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 40.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 40.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 40.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 40.3. hypervisor stats show Display hypervisor stats details Usage: Table 40.12. Command arguments Value Summary -h, --help Show this help message and exit Table 40.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 40.14. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 40.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 40.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
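For example, the following invocations are illustrative; the hypervisor name compute-0.example.com and the selected stats columns are assumptions rather than values from your environment:

$ openstack hypervisor list --long
$ openstack hypervisor show compute-0.example.com -f json
$ openstack hypervisor stats show -f value -c count -c vcpus_used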
|
[
"openstack hypervisor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--matching <hostname>] [--marker <marker>] [--limit <limit>] [--long]",
"openstack hypervisor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <hypervisor>",
"openstack hypervisor stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/hypervisor
|
Chapter 6. Installing a private cluster on IBM Power Virtual Server
|
Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.17, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain . Unlike standard deployments on Power VS which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 6.4. 
Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 11 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" publish: Internal 12 pullSecret: '{"auths": ...}' 13 sshKey: ssh-ed25519 AAAA... 14 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 
9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the name of an existing VPC. 12 Specify how to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 13 Required. The installation program prompts you for this value. 14 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. Next steps Customize your cluster Optional: Opt out of remote health reporting
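After you log in, a quick way to confirm overall cluster health, not covered in the steps above but commonly used, is to list the nodes and cluster Operators: USD oc get nodes USD oc get clusteroperators All nodes should report a STATUS of Ready, and every cluster Operator should report AVAILABLE as True.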
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 11 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" publish: Internal 12 pullSecret: '{\"auths\": ...}' 13 sshKey: ssh-ed25519 AAAA... 14",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster
|
Chapter 5. Deploying Red Hat Quay
|
Chapter 5. Deploying Red Hat Quay To deploy the Red Hat Quay service on the nodes in your cluster, you use the same Quay container you used to create the configuration file. The differences here are that you: Identify directories where the configuration files and data are stored Run the command with --sysctl net.core.somaxconn=4096 Don't use the config option or password For a basic setup, you can deploy on a single node; for high availability you probably want three or more nodes (for example, quay01, quay02, and quay03). Note The resulting Red Hat Quay service will listen on regular port 8080 and SSL port 8443. This is different from previous releases of Red Hat Quay, which listened on standard ports 80 and 443, respectively. In this document, we map 8080 and 8443 to standard ports 80 and 443 on the host, respectively. Throughout the rest of this document, we assume you have mapped the ports in this way. Here is what you do: Create directories : Create two directories to store configuration information and data on the host. For example: Copy config files : Copy the tarball ( quay-config.tar.gz ) to the configuration directory and unpack it. For example: Deploy Red Hat Quay : Having already authenticated to Quay.io (see Accessing Red Hat Quay ) run Red Hat Quay as a container, as follows: Note Add -e DEBUGLOG=true to the podman run command line for the Quay container to enable debug level logging. Add -e IGNORE_VALIDATION=true to bypass validation during the startup process. Open browser to UI : Once the Quay container has started, go to your web browser and open the URL to the node running the Quay container. Log into Red Hat Quay : Using the superuser account you created during configuration, log in and make sure Red Hat Quay is working properly. Add more Red Hat Quay nodes : At this point, you have the option of adding more nodes to this Red Hat Quay cluster by simply going to each node, then adding the tarball and starting the Quay container as just shown. Add optional features : To add more features to your Red Hat Quay cluster, such as Clair image scanning and repository mirroring, continue on to the next section. 5.1. Add Clair image scanning to Red Hat Quay Setting up and deploying Clair image scanning for your Red Hat Quay deployment is described in Clair Security Scanning . 5.2. Add repository mirroring to Red Hat Quay Enabling repository mirroring allows you to create container image repositories on your Red Hat Quay cluster that exactly match the content of a selected external registry, then sync the contents of those repositories on a regular schedule and on demand. To add the repository mirroring feature to your Red Hat Quay cluster: Run the repository mirroring worker. To do this, you start a quay pod with the repomirror option. Select "Enable Repository Mirroring" in the Red Hat Quay Setup tool. Log into your Red Hat Quay Web UI and begin creating mirrored repositories as described in Repository Mirroring in Red Hat Quay . The following procedure assumes you already have a running Red Hat Quay cluster on an OpenShift platform, with the Red Hat Quay Setup container running in your browser: Start the repo mirroring worker : Start the Quay container in repomirror mode. This example assumes you have configured TLS communications using a certificate that is currently stored in /root/ca.crt . If not, then remove the line that adds /root/ca.crt to the container: Log into config tool : Log into the Red Hat Quay Setup Web UI (config tool). 
Enable repository mirroring : Scroll down to the Repository Mirroring section and select the Enable Repository Mirroring check box. Select HTTPS and cert verification : If you want to require HTTPS communications and verify certificates during mirroring, select this check box. Save configuration : Select the Save Configuration Changes button. Repository mirroring should now be enabled on your Red Hat Quay cluster. Refer to Repository Mirroring in Red Hat Quay for details on setting up your own mirrored container image repositories.
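As an informal verification that is not part of the documented procedure, you can confirm on each node that the Quay and mirroring worker containers are running and that the registry responds on its health endpoint; the hostname below is a placeholder: USD sudo podman ps USD curl -k https://<quay-host>/health/instance A healthy instance returns an HTTP 200 response with a JSON payload describing the status of its services.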
|
[
"mkdir -p /mnt/quay/config #optional: if you don't choose to install an Object Store mkdir -p /mnt/quay/storage",
"cp quay-config.tar.gz /mnt/quay/config/ tar xvf quay-config.tar.gz config.yaml ssl.cert ssl.key",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.12.8",
"sudo podman run -d --name mirroring-worker -v /mnt/quay/config:/conf/stack:Z -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt registry.redhat.io/quay/quay-rhel8:v3.12.8 repomirror"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/deploying_red_hat_quay
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/updating_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
|
Chapter 79. task
|
Chapter 79. task This chapter describes the commands under the task command. 79.1. task execution list List all tasks. Usage: Table 79.1. Positional arguments Value Summary workflow_execution Workflow execution id associated with list of tasks. Table 79.2. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest Table 79.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.2. task execution published show Show task published variables. Usage: Table 79.7. Positional arguments Value Summary id Task id Table 79.8. Command arguments Value Summary -h, --help Show this help message and exit 79.3. task execution rerun Rerun an existing task. Usage: Table 79.9. Positional arguments Value Summary id Task identifier Table 79.10. Command arguments Value Summary -h, --help Show this help message and exit --resume Rerun only failed or unstarted action executions for with-items task -e ENV, --env ENV Environment variables Table 79.11. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.13. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.4. task execution result show Show task output data. Usage: Table 79.15. Positional arguments Value Summary id Task id Table 79.16. Command arguments Value Summary -h, --help Show this help message and exit 79.5. task execution show Show specific task. Usage: Table 79.17. Positional arguments Value Summary task Task identifier Table 79.18. Command arguments Value Summary -h, --help Show this help message and exit Table 79.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
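For illustration, the options documented above can be combined as follows to list the ten most recent task executions of a given workflow execution in JSON format; the workflow execution ID is a placeholder: USD openstack task execution list <workflow_execution_id> --limit 10 --sort_keys created_at --sort_dirs desc -f json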
|
[
"openstack task execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [workflow_execution]",
"openstack task execution published show [-h] id",
"openstack task execution rerun [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resume] [-e ENV] id",
"openstack task execution result show [-h] id",
"openstack task execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] task"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/task
|
Chapter 1. High Availability Add-On overview
|
Chapter 1. High Availability Add-On overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. A cluster is two or more computers (called nodes or members ) that work together to perform a task. Clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types. High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its high availability service management component, Pacemaker . Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide guided indexes to the various areas of Red Hat cluster documentation, see the Red Hat High Availability Add-On Documentation Guide . 1.1. High Availability Add-On components The Red Hat High Availability Add-On consists of several components that provide the high availability service. The major components of the High Availability Add-On are as follows: Cluster infrastructure - Provides fundamental functions for nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing. High availability service management - Provides failover of services from one cluster node to another in case a node becomes inoperative. Cluster administration tools - Configuration and management tools for setting up, configuring, and managing the High Availability Add-On. The tools are for use with the cluster infrastructure components, the high availability and service management components, and storage. You can supplement the High Availability Add-On with the following components: Red Hat GFS2 (Global File System 2) - Part of the Resilient Storage Add-On, this provides a cluster file system for use with the High Availability Add-On. GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure. LVM Locking Daemon ( lvmlockd ) - Part of the Resilient Storage Add-On, this provides volume management of cluster storage. lvmlockd support also requires cluster infrastructure. HAProxy - Routing software that provides high availability load balancing and failover in layer 4 (TCP) and layer 7 (HTTP, HTTPS) services. 1.2. High Availability Add-On concepts Some of the key concepts of a Red Hat High Availability Add-On cluster are as follows. 1.2.1. Fencing If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. 
Instead, you must provide an external method, which is called fencing with a fence agent. A fence device is an external device that can be used by the cluster to restrict access to shared resources by an errant node, or to issue a hard reboot on the cluster node. Without a fence device configured you do not have a way to know that the resources previously used by the disconnected cluster node have been released, and this could prevent the services from running on any of the other cluster nodes. Conversely, the system may assume erroneously that the cluster node has released its resources and this can lead to data corruption and data loss. Without a fence device configured data integrity cannot be guaranteed and the cluster configuration will be unsupported. When the fencing is in progress no other cluster operation is allowed to run. Normal operation of the cluster cannot resume until fencing has completed or the cluster node rejoins the cluster after the cluster node has been rebooted. For more information about fencing, see the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . 1.2.2. Quorum In order to maintain cluster integrity and availability, cluster systems use a concept known as quorum to prevent data corruption and loss. A cluster has quorum when more than half of the cluster nodes are online. To mitigate the chance of data corruption due to failure, Pacemaker by default stops all resources if the cluster does not have quorum. Quorum is established using a voting system. When a cluster node does not function as it should or loses communication with the rest of the cluster, the majority of working nodes can vote to isolate and, if needed, fence the node for servicing. For example, in a 6-node cluster, quorum is established when at least 4 cluster nodes are functioning. If the majority of nodes go offline or become unavailable, the cluster no longer has quorum and Pacemaker stops clustered services. The quorum features in Pacemaker prevent what is also known as split-brain , a phenomenon where the cluster is separated from communication but each part continues working as separate clusters, potentially writing to the same data and possibly causing corruption or loss. For more information about what it means to be in a split-brain state, and on quorum concepts in general, see the Red Hat Knowledgebase article Exploring Concepts of RHEL High Availability Clusters - Quorum . A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split-brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. 
colocation constraints - A colocation constraint determines where resources will be placed relative to other resources. One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups . 1.3. Pacemaker overview Pacemaker is a cluster resource manager. It achieves maximum availability for your cluster services and resources by making use of the cluster infrastructure's messaging and membership capabilities to deter and recover from node and resource-level failure. 1.3.1. Pacemaker architecture components A cluster configured with Pacemaker comprises separate component daemons that monitor cluster membership, scripts that manage the services, and resource management subsystems that monitor the disparate resources. The following components form the Pacemaker architecture: Cluster Information Base (CIB) The Pacemaker information daemon, which uses XML internally to distribute and synchronize current configuration and status information from the Designated Coordinator (DC) - a node assigned by Pacemaker to store and distribute cluster state and actions by means of the CIB - to all other cluster nodes. Cluster Resource Management Daemon (CRMd) Pacemaker cluster resource actions are routed through this daemon. Resources managed by CRMd can be queried by client systems, moved, instantiated, and changed when needed. Each cluster node also includes a local resource manager daemon (LRMd) that acts as an interface between CRMd and resources. LRMd passes commands from CRMd to agents, such as starting and stopping and relaying status information. Shoot the Other Node in the Head (STONITH) STONITH is the Pacemaker fencing implementation. It acts as a cluster resource in Pacemaker that processes fence requests, forcefully shutting down nodes and removing them from the cluster to ensure data integrity. STONITH is configured in the CIB and can be monitored as a normal cluster resource. corosync corosync is the component - and a daemon of the same name - that serves the core membership and member-communication needs for high availability clusters. It is required for the High Availability Add-On to function. In addition to those membership and messaging functions, corosync also: Manages quorum rules and determination. Provides messaging capabilities for applications that coordinate or operate across multiple members of the cluster and thus must communicate stateful or other information between instances. Uses the kronosnet library as its network transport to provide multiple redundant links and automatic failover. 1.3.2. Pacemaker configuration and management tools The High Availability Add-On features two configuration tools for cluster deployment, monitoring, and management. pcs The pcs command-line interface controls and configures Pacemaker and the corosync heartbeat daemon. A command-line based program, pcs can perform the following cluster management tasks: Create and configure a Pacemaker/Corosync cluster Modify configuration of the cluster while it is running Remotely configure both Pacemaker and Corosync as well as start, stop, and display status information of the cluster pcsd Web UI A graphical user interface to create and configure Pacemaker/Corosync clusters. 1.3.3. The cluster and Pacemaker configuration files The configuration files for the Red Hat High Availability Add-On are corosync.conf and cib.xml . 
The corosync.conf file provides the cluster parameters used by corosync , the cluster manager that Pacemaker is built on. In general, you should not edit the corosync.conf directly but, instead, use the pcs or pcsd interface. The cib.xml file is an XML file that represents both the cluster's configuration and the current state of all resources in the cluster. This file is used by Pacemaker's Cluster Information Base (CIB). The contents of the CIB are automatically kept in sync across the entire cluster. Do not edit the cib.xml file directly; use the pcs or pcsd interface instead. 1.4. LVM logical volumes in a Red Hat high availability cluster The Red Hat High Availability Add-On provides support for LVM volumes in two distinct cluster configurations. The cluster configurations you can choose are as follows: High availability LVM volumes (HA-LVM) in active/passive failover configurations in which only a single node of the cluster accesses the storage at any one time. LVM volumes that use the lvmlockd daemon to manage storage devices in active/active configurations in which more than one node of the cluster requires access to the storage at the same time. The lvmlockd daemon is part of the Resilient Storage Add-On. 1.4.1. Choosing HA-LVM or shared volumes When to use HA-LVM or shared logical volumes managed by the lvmlockd daemon should be based on the needs of the applications or services being deployed. If multiple nodes of the cluster require simultaneous read/write access to LVM volumes in an active/active system, then you must use the lvmlockd daemon and configure your volumes as shared volumes. The lvmlockd daemon provides a system for coordinating activation of and changes to LVM volumes across nodes of a cluster concurrently. The lvmlockd daemon's locking service provides protection to LVM metadata as various nodes of the cluster interact with volumes and make changes to their layout. This protection is contingent upon configuring any volume group that will be activated simultaneously across multiple cluster nodes as a shared volume. If the high availability cluster is configured to manage shared resources in an active/passive manner with only one single member needing access to a given LVM volume at a time, then you can use HA-LVM without the lvmlockd locking service. Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Choosing to run an application that is not cluster-aware on shared logical volumes can result in degraded performance. This is because there is cluster communication overhead for the logical volumes themselves in these instances. A cluster-aware application must be able to achieve performance gains above the performance losses introduced by cluster file systems and cluster-aware logical volumes. This is achievable for some applications and workloads more easily than others. Determining what the requirements of the cluster are and whether the extra effort toward optimizing for an active/active cluster will pay dividends is the way to choose between the two LVM variants. Most users will achieve the best HA results from using HA-LVM. HA-LVM and shared logical volumes using lvmlockd are similar in the fact that they prevent corruption of LVM metadata and its logical volumes, which could otherwise occur if multiple machines are allowed to make overlapping changes. 
HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. A shared volume using lvmlockd does not impose these restrictions and a user is free to activate a logical volume on all machines in a cluster; this forces the use of cluster-aware storage drivers, which allow for cluster-aware file systems and applications to be put on top. 1.4.2. Configuring LVM volumes in a cluster Clusters are managed through Pacemaker. Both HA-LVM and shared logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources. Note If an LVM volume group used by a Pacemaker cluster contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you configure a systemd resource-agents-deps target and a systemd drop-in unit for the target to ensure that the service starts before Pacemaker starts. For information on configuring a systemd resource-agents-deps target, see Configuring startup order for resource dependencies not managed by Pacemaker . For examples of procedures for configuring an HA-LVM volume as part of a Pacemaker cluster, see Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster and Configuring an active/passive NFS server in a Red Hat High Availability cluster . Note that these procedures include the following steps: Ensuring that only the cluster is capable of activating the volume group Configuring an LVM logical volume Configuring the LVM volume as a cluster resource For procedures for configuring shared LVM volumes that use the lvmlockd daemon to manage storage devices in active/active configurations, see GFS2 file systems in a cluster and Configuring an active/active Samba server in a Red Hat High Availability cluster .
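As a brief illustration of what such a configuration looks like, the following pcs commands create an HA-LVM (active/passive) resource; the volume group and resource group names are hypothetical, and the linked procedures remain the authoritative steps: USD pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group my_group USD pcs resource status The LVM-activate resource agent activates the volume group only on the node where the resource group is running, which enforces the exclusive, single-node access that HA-LVM requires.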
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_overview-of-high-availability-configuring-and-managing-high-availability-clusters
|
3. Installation-Related Notes
|
3. Installation-Related Notes The following section includes information specific to installation of Red Hat Enterprise Linux and the Anaconda installation program. Note When updating from one minor version of Red Hat Enterprise Linux 4 (such as 4.6 to 4.7) to Red Hat Enterprise Linux 4.8, it is recommended that you do so using Red Hat Network, whether through the hosted web user interface or Red Hat Network Satellite. If you are upgrading a system with no available network connectivity, use the "Upgrade" functionality of Anaconda . However, note that Anaconda has limited abilities to handle issues such as dependencies on additional repositories or third-party applications. Further, Anaconda reports installation errors in a log file, not interactively. As such, Red Hat recommends that when upgrading offline systems, you should test and verify the integrity of your upgrade configuration first. Be sure to carefully review the update log for errors before applying the upgrade to your production environment. In-place upgrades between major versions of Red Hat Enterprise Linux (for example, upgrading from Red Hat Enterprise Linux 3 to Red Hat Enterprise Linux 4.8) are not supported. While the "Upgrade" option of Anaconda allows you to perform this, there is no guarantee that the upgrade will result in a working installation. In-place upgrades across major releases do not preserve all system settings, services, and custom configurations. For this reason, Red Hat strongly recommends that you perform a fresh installation when planning to upgrade between major versions. 3.1. All Architectures Important If you are copying the contents of the Red Hat Enterprise Linux 4.8 CD-ROMs (in preparation for a network-based installation, for example) be sure you copy the CD-ROMs for the operating system only . Do not copy the Supplementary CD-ROM, or any of the layered product CD-ROMs, as this will overwrite files necessary for Anaconda's proper operation. These CD-ROMs must be installed after Red Hat Enterprise Linux is installed. Bugzilla #205295 The version of GRUB shipped with Red Hat Enterprise Linux 4 (and all updates) does not support software mirroring (RAID1). As such, if you install Red Hat Enterprise Linux 4 on a RAID1 partition, the bootloader will be installed in the first hard drive instead of the master boot record (MBR). This will render the system unbootable. If you wish to install Red Hat Enterprise Linux 4 on a RAID1 partition, you should clear any pre-existing bootloader from the MBR first. Bugzilla #222958 When installing Red Hat Enterprise Linux 4 in Text Mode on systems that use flat-panel monitors and some ATI cards, the screen area may appear shifted. When this occurs, some areas of the screen will be obscured. If this occurs, perform the installation with the parameter linux nofb . Bugzilla #445835 When upgrading from Red Hat Enterprise Linux 4.6 to this release, minilogd may log several SELinux denials. These error logs are harmless, and can be safely ignored. Bugzilla #430476 Previously, in the Anaconda kickstart documentation (located at: /usr/share/doc/anaconda-<anaconda-version>/kickstart-docs.txt ), the section detailing the --driveorder option in a kickstart file stated: However, the --driveorder option actually requires a list of all drives on the system, with the first boot device appearing first in the list. 
With this update, the documentation has been clarified and now reads: "Specify which drive is first in the BIOS boot order. The ordered list must include all the drives in the system."
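For illustration only, a kickstart bootloader entry that follows the corrected guidance might look like the following; the drive names are placeholders, and the list must name every drive in the system, with the BIOS boot drive first:
bootloader --location=mbr --driveorder=sda,sdb,sdc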
|
[
"Specify which drive is first in the BIOS boot order.",
"Specify which drive is first in the BIOS boot order. The ordered list must include all the drives in the system."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.8_release_notes/ar01s03
|
Chapter 3. Additional Features
|
Chapter 3. Additional Features 3.1. Adding a FORM Login as a Fallback JBoss EAP and applications deployed to it can also configure a FORM login authentication mechanism to use as a fallback. This allows applications to present a login page for authentication in cases where Kerberos/SPNEGO tokens are not present. This authentication happens independent of the Kerberos authentication. As a result, depending on how the FORM login fallback is configured, users may require separate credentials to authenticate by this method. Note The fallback to FORM login is available when no SPNEGO or NTLM tokens are present or, when a SPNEGO token is present, but from another KDC. 3.1.1. Update Your Application The following steps are required to configure your application for FORM login as a fallback: Configure JBoss EAP and the web application to use Kerberos and SPNEGO. See How to Set Up SSO for JBoss EAP with Kerberos for the steps required to configure JBoss EAP and web applications to use Kerberos and SPNEGO for authentication and authorization. Add the login and error pages. To use FORM login, a login and error page are required. These files are added to web application and are used in the authentication process. Example: login.jsp File <html> <head></head> <body> <form id="login_form" name="login_form" method="post" action="j_security_check" enctype="application/x-www-form-urlencoded"> <center> <p>Please login to proceed.</p> </center> <div style="margin-left: 15px;"> <p> <label for="username">Username</label> <br /> <input id="username" type="text" name="j_username"/> </p> <p> <label for="password">Password</label> <br /> <input id="password" type="password" name="j_password" value=""/> </p> <center> <input id="submit" type="submit" name="submit" value="Login"/> </center> </div> </form> </body> </html> Example: error.jsp File <html> <head></head> <body> <p>Login failed, please go back and try again.</p> </body> </html> Modify the web.xml . After adding the login and error pages to the web application, the web.xml must be updated to use these files for FORM login. The exact value FORM must be added to the <auth-method> element. Since <auth-method> expects a comma-separated list and order is significant, the exact value for <auth-method> must be updated to SPNEGO,FORM . In addition, a <form-login-config> element must be added to <login-config> and the paths to the login and error pages specified as <form-login-page> and <form-error-page> elements. Example: Updated web.xml File <web-app> <display-name>App1</display-name> <description>App1</description> <!-- Define a security constraint that requires the Admin role to access resources --> <security-constraint> <display-name>Security Constraint on Conversation</display-name> <web-resource-collection> <web-resource-name>examplesWebApp</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> </auth-constraint> </security-constraint> <!-- Define the Login Configuration for this Application --> <login-config> <auth-method>SPNEGO,FORM</auth-method> <realm-name>SPNEGO</realm-name> <form-login-config> <form-login-page>/login.jsp</form-login-page> <form-error-page>/error.jsp</form-error-page> </form-login-config> </login-config> <!-- Security roles referenced by this web application --> <security-role> <description> role required to log in to the Application</description> <role-name>Admin</role-name> </security-role> </web-app> 3.1.2. 
Update the Elytron Subsystem Add a mechanism for FORM authentication in the http-authentication-factory . You can use the existing http-authentication-factory you configured for kerberos-based authentication and add an additional mechanism for FORM authentication. Add additional fallback principals. The existing configuration for kerberos-based authentication should already have a security realm configured for mapping principals from the kerberos token to roles for the application. You can add additional users for fallback authentication to that realm. For example, if you used a filesystem-realm , you can simply create a new user with the appropriate roles: 3.1.3. Update the Legacy Security Subsystem If you are using the legacy security subsystem in JBoss EAP, you must update the security domain for fallback authentication. The web application security domain must be configured to support a fallback login mechanism. This requires the following steps: Add a new security domain to serve as a fallback authentication method. Add a usernamePasswordDomain module option to the web application security domain that points to the fallback domain. Example: Security Domain Configured with a Fallback Security Domain 3.2. Securing the Management Interfaces with Kerberos In addition to providing Kerberos authentication in security domains, JBoss EAP also provides the ability to secure the management interfaces using Kerberos. 3.2.1. Secure the Management Interfaces with Kerberos Using Elytron To configure Kerberos authentication for the HTTP management interface: Follow the same instructions for configuring Kerberos authentication for applications to create an http-authentication-factory that does Kerberos authentication. Important When configuring Kerberos authentication with the management interfaces, it is very important that you pay close attention to the service principal you configure for JBoss EAP to authenticate against the KDC. This service principal takes the form of service-name/hostname . JBoss EAP expects HTTP to be the service name, for example HTTP/localhost , when authenticating against the web-based management console and remote to be the service name, for example remote/localhost , for the management CLI. Update the management HTTP interface to use the http-authentication-factory . To configure Kerberos authentication for SASL authentication for the management CLI: Follow the same instructions for configuring Kerberos authentication for applications to create a security domain and kerberos-security-factory . Add GSSAPI to the configurable-sasl-server-factory . Create a sasl-authentication-factory that uses the security domain and kerberos-security-factory . Example: sasl-authentication-factory Update the management SASL interface to use the sasl-authentication-factory . Example: Update sasl-authentication-factory 3.2.2. Secure the Management Interfaces With Kerberos Using Legacy Core Management Authentication To enable Kerberos authentication on the management interfaces using legacy core management authentication, the following steps must be performed: Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . Enable the relevant system properties. As discussed in a previous section, enable any needed JBoss EAP system properties for connecting to the Kerberos server. Add the Kerberos server identity to the security realm.
Before Kerberos authentication can be used in a security realm, a connection to a Kerberos server must be added. The following example shows how to add a Kerberos server identity to the existing Management Realm. You will need to replace service-name , hostname , and MY-REALM with the appropriate values. Example CLI for Adding a Server Identity to a Security Realm Important When configuring Kerberos authentication with the management interfaces, it is very important that you pay close attention to the service principal you configure for JBoss EAP to authenticate against the KDC. This service principal takes the form of service-name/hostname . JBoss EAP expects HTTP to be the service name, for example HTTP/localhost , when authenticating against the web-based management console and remote to be the service name, for example remote/localhost , for the management CLI. Update the authentication method in the security realm. Once the Kerberos server identity has been properly configured, the authentication method in the security realm needs to be updated to use it. Example: Adding Kerberos Authentication to a Security Realm Important Based on the order in which you have the authentication mechanisms defined in the security realm, JBoss EAP will attempt to authenticate the user in that order when accessing the management interfaces. Secure both interfaces with Kerberos. In cases where you would like to secure both the web-based management console and management CLI with Kerberos, you need a Kerberos server identity configured for each. To add an additional identity, use the following command. 3.2.3. Connecting to the Management Interface Before attempting to connect to the management interfaces, you need to have a valid Kerberos ticket. If the security realm fails to authenticate a user via Kerberos, when using the legacy security solution, it will attempt to authenticate the user using any of the subsequent methods specified in the <authentication> element. The elytron subsystem behaves similar to the legacy security solution. If the Kerberos authentication mechanism fails, authentication falls back to any other mechanism that you have defined in the authentication factory that is protecting the management interface. Usually, DIGEST or BASIC is used as a fallback. When you connect to the web-based management console using a browser, the security realm will attempt to authenticate you based on that ticket. When connecting to the management CLI, you will need to use the -Djavax.security.auth.useSubjectCredsOnly=false parameter, as this allows the GSSAPI implementation to make use of the identity managed at the operating system level. You may also need to use the following parameters based on how your environment is set up: -Djava.security.krb5.realm= REALM_NAME Specifies the realm name. -Djava.security.krb5.kdc= KDC_HOSTNAME Specifies the location of the KDC. --no-local-auth Disables local authentication. This is useful if you are attempting to connect to a JBoss EAP instance running on the same machine you are running the script from. Example Command Warning If an HTTP proxy is used between the client and server, it must take care to not share authenticated connections between different authenticated clients to the same server. If this is not honored, then the server can easily lose track of security context associations. 
A proxy that correctly honors client to server authentication integrity will supply the Proxy-support: Session-Based-Authentication HTTP header to the client in HTTP responses from the proxy. The client must not utilize the SPNEGO HTTP authentication mechanism through a proxy unless the proxy supplies this header with the 401 Unauthorized response from the server. 3.3. Kerberos Authentication Integration for Remoting In addition to using Kerberos for securing the management interfaces and web applications, you can also configure Kerberos authentication for services accessed via remoting, such as Jakarta Enterprise Beans. System properties for Kerberos also need to be configured. For more information, see Configure the Elytron Subsystem . 3.3.1. Kerberos Authentication Integration Using Legacy Security Realms To configure Kerberos authentication, you will need to do the following: Configure a security domain with remoting and RealmDirect You need to configure a security domain for use by the service that is accessed by remoting. This security domain needs to make use of both the Remoting login module as well as a RealmDirect login module, such as RealmDirect or RealmUsersRoles . Essentially, it should look very similar to the other security domain provided by default. For more details on the specific configuration options of each login module, see the JBoss EAP Login Module Reference . Example: Security Domain with Remoting and RealmDirect Login Modules Configure a security realm for Kerberos authentication. Setting up a security realm with Kerberos authentication is covered in the Securing the Management Interfaces with Kerberos section. Example: Security Realm Configure the HTTP connector in the remoting subsystem. In addition, you will need to configure the HTTP connector in the remoting subsystem to use the newly created security realm. Example: Remoting Subsystem Configure security for the service. You must also set up the service that is accessed using the remoting interface to be secured. This will vary depending on the service. For example, with a Jakarta Enterprise Bean, you can use the @SecurityDomain and @RolesAllowed annotations. 3.3.2. Kerberos Authentication Integration Using Elytron It is possible to define an Elytron security domain for Kerberos or GSSAPI SASL authentication for remoting authentication. Define the security realm to load the identity from. It is used for assigning roles. Define the Kerberos security factory for the server's identity. Define the security domain and, along with it, a SASL authentication factory. Use the created sasl-authentication-factory , in the remoting subsystem, to enable it for remoting. Example CLI Command Configure security for the service. If you reference the security domain in a Jakarta Enterprise Bean, you must specify the application-security-domain that maps to the Elytron security domain. For example, with a Jakarta Enterprise Bean, you can use the @SecurityDomain annotation. Example CLI Command The use of a Jakarta Authentication Subject for identity association is no longer supported.
Clients wishing to programmatically manage the Kerberos identity for a Jakarta Enterprise Beans call should migrate and use the AuthenticationConfiguration APIs directly, as follows: // create your authentication configuration AuthenticationConfiguration configuration = AuthenticationConfiguration.empty() .useProvidersFromClassLoader(SecuredGSSCredentialClient.class.getClassLoader()) .useGSSCredential(getGSSCredential()); // create your authentication context AuthenticationContext context = AuthenticationContext.empty().with(MatchRule.ALL, configuration); // create a callable that looks up an Jakarta Enterprise Bean and invokes a method on it Callable<Void> callable = () -> { ... }; // use your authentication context to run your callable context.runCallable(callable); The call to useGSSCredential(getGSSCredential()) happens when creating the AuthenticationConfiguration . Client code that already has access to a Jakarta Authentication Subject can easily be converted to obtain the GSSCredential as follows: private GSSCredential getGSSCredential() { return Subject.doAs(subject, new PrivilegedAction<GSSCredential>() { public GSSCredential run() { try { GSSManager gssManager = GSSManager.getInstance(); return gssManager.createCredential(GSSCredential.INITIATE_ONLY); } catch (Exception e) { e.printStackTrace(); } return null; } }); } Revised on 2024-01-17 05:25:11 UTC
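As a supplementary sketch for the management interface scenario in section 3.2.3: before launching the management CLI command shown earlier, a client typically obtains and verifies a Kerberos ticket at the operating system level. The principal and realm below are placeholders for your environment:
kinit [email protected]
klist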
|
[
"<html> <head></head> <body> <form id=\"login_form\" name=\"login_form\" method=\"post\" action=\"j_security_check\" enctype=\"application/x-www-form-urlencoded\"> <center> <p>Please login to proceed.</p> </center> <div style=\"margin-left: 15px;\"> <p> <label for=\"username\">Username</label> <br /> <input id=\"username\" type=\"text\" name=\"j_username\"/> </p> <p> <label for=\"password\">Password</label> <br /> <input id=\"password\" type=\"password\" name=\"j_password\" value=\"\"/> </p> <center> <input id=\"submit\" type=\"submit\" name=\"submit\" value=\"Login\"/> </center> </div> </form> </body> </html>",
"<html> <head></head> <body> <p>Login failed, please go back and try again.</p> </body> </html>",
"<web-app> <display-name>App1</display-name> <description>App1</description> <!-- Define a security constraint that requires the Admin role to access resources --> <security-constraint> <display-name>Security Constraint on Conversation</display-name> <web-resource-collection> <web-resource-name>examplesWebApp</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> </auth-constraint> </security-constraint> <!-- Define the Login Configuration for this Application --> <login-config> <auth-method>SPNEGO,FORM</auth-method> <realm-name>SPNEGO</realm-name> <form-login-config> <form-login-page>/login.jsp</form-login-page> <form-error-page>/error.jsp</form-error-page> </form-login-config> </login-config> <!-- Security roles referenced by this web application --> <security-role> <description> role required to log in to the Application</description> <role-name>Admin</role-name> </security-role> </web-app>",
"/subsystem=elytron/http-authentication-factory=example-krb-http-auth:list-add(name=mechanism-configurations, value={mechanism-name=FORM})",
"/subsystem=elytron/filesystem-realm=exampleFsRealm:add-identity(identity=fallbackUser1) /subsystem=elytron/filesystem-realm=exampleFsRealm:set-password(identity=fallbackUser1, clear={password=\"password123\"}) /subsystem=elytron/filesystem-realm=exampleFsRealm:add-identity-attribute(identity=fallbackUser1, name=Roles, value=[\"Admin\",\"Guest\"])",
"/subsystem=security/security-domain=app-fallback:add(cache-type=default) /subsystem=security/security-domain=app-fallback/authentication=classic:add() /subsystem=security/security-domain=app-fallback/authentication=classic/login-module=UsersRoles:add(code=UsersRoles, flag=required, module-options=[usersProperties=\"file:USD{jboss.server.config.dir}/fallback-users.properties\", rolesProperties=\"file:USD{jboss.server.config.dir}/fallback-roles.properties\"]) /subsystem=security/security-domain=app-spnego/authentication=classic/login-module=SPNEGO:add(code=SPNEGO, flag=required, module-options=[serverSecurityDomain=host]) /subsystem=security/security-domain=app-spnego/authentication=classic/login-module=SPNEGO:map-put(name=module-options, key=usernamePasswordDomain, value=app-fallback) /subsystem=security/security-domain=app-spnego/authentication=classic/login-module=SPNEGO:map-put(name=module-options, key=password-stacking, value=useFirstPass) reload",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=example-krb-http-auth)",
"/subsystem=elytron/configurable-sasl-server-factory=configured:list-add(name=filters, value={pattern-filter=GSSAPI})",
"/subsystem=elytron/sasl-authentication-factory=example-sasl-auth:add(sasl-server-factory=configured, security-domain=exampleFsSD, mechanism-configurations=[{mechanism-name=GSSAPI, mechanism-realm-configurations=[{realm-name=exampleFsSD}], credential-security-factory=krbSF}])",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade.sasl-authentication-factory, value=example-sasl-auth) reload",
"/core-service=management/security-realm=ManagementRealm/server-identity=kerberos:add() /core-service=management/security-realm=ManagementRealm/server-identity=kerberos/keytab=service-name\\/hostname@MY-REALM:add(path=/home\\/username\\/service.keytab, debug=true) reload",
"/core-service=management/security-realm=ManagementRealm/authentication=kerberos:add() reload",
"/core-service=management/security-realm=ManagementRealm/server-identity=kerberos/keytab=remote\\/hostname@MY-REALM:add(path=/home\\/username\\/remote.keytab, debug=true) reload",
"EAP_HOME /bin/jboss-cli.sh -c -Djavax.security.auth.useSubjectCredsOnly=false --no-local-auth",
"/subsystem=security/security-domain=krb-remoting-domain:add() /subsystem=security/security-domain=krb-remoting-domain/authentication=classic:add() /subsystem=security/security-domain=krb-remoting-domain/authentication=classic/login-module=Remoting:add(code=Remoting, flag=optional, module-options=[password-stacking=useFirstPass]) /subsystem=security/security-domain=krb-remoting-domain/authentication=classic/login-module=RealmDirect:add(code=RealmDirect, flag=required, module-options=[password-stacking=useFirstPass, realm=krbRealm]) /subsystem=security/security-domain=krb-remoting-domain/mapping=classic:add() /subsystem=security/security-domain=krb-remoting-domain/mapping=classic/mapping-module=SimpleRoles:add(code=SimpleRoles, type=role, module-options=[\"testUser\"=\"testRole\"]) reload",
"/core-service=management/security-realm=krbRealm:add() /core-service=management/security-realm=krbRealm/server-identity=kerberos:add() /core-service=management/security-realm=krbRealm/server-identity=kerberos/keytab=remote\\/[email protected]:add(path=\\/path\\/to\\/remote.keytab, debug=true) /core-service=management/security-realm=krbRealm/authentication=kerberos:add(remove-realm=true) reload",
"/subsystem=remoting/http-connector=http-remoting-connector:write-attribute(name=security-realm, value=krbRealm)",
"/path=kerberos:add(relative-to=user.home, path=src/kerberos) /subsystem=elytron/properties-realm=kerberos-properties:add(users-properties={path=kerberos-users.properties, relative-to=kerberos, digest-realm-name=ELYTRON.ORG}, groups-properties={path=kerberos-groups.properties, relative-to=kerberos})",
"/subsystem=elytron/kerberos-security-factory=test-server:add(relative-to=kerberos, path=remote-test-server.keytab, principal=remote/[email protected])",
"/subsystem=elytron/security-domain=KerberosDomain:add(default-realm=kerberos-properties, realms=[{realm=kerberos-properties, role-decoder=groups-to-roles}], permission-mapper=default-permission-mapper) /subsystem=elytron/sasl-authentication-factory=gssapi-authentication-factory:add(security-domain=KerberosDomain, sasl-server-factory=elytron, mechanism-configurations=[{mechanism-name=GSSAPI, credential-security-factory=test-server}])",
"/subsystem=remoting/http-connector=http-remoting-connector:write-attribute(name=sasl-authentication-factory, value=gssapi-authentication-factory)",
"/subsystem=ejb3/application-security-domain=KerberosDomain:add(security-domain=KerberosDomain)",
"// create your authentication configuration AuthenticationConfiguration configuration = AuthenticationConfiguration.empty() .useProvidersFromClassLoader(SecuredGSSCredentialClient.class.getClassLoader()) .useGSSCredential(getGSSCredential()); // create your authentication context AuthenticationContext context = AuthenticationContext.empty().with(MatchRule.ALL, configuration); // create a callable that looks up an Jakarta Enterprise Bean and invokes a method on it Callable<Void> callable = () -> { }; // use your authentication context to run your callable context.runCallable(callable);",
"private GSSCredential getGSSCredential() { return Subject.doAs(subject, new PrivilegedAction<GSSCredential>() { public GSSCredential run() { try { GSSManager gssManager = GSSManager.getInstance(); return gssManager.createCredential(GSSCredential.INITIATE_ONLY); } catch (Exception e) { e.printStackTrace(); } return null; } }); }"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_kerberos/additional_features
|
1.3. Installing GFS2
|
1.3. Installing GFS2 In addition to the packages required for the Red Hat High Availability Add-On, you must install the gfs2-utils package for GFS2 and the lvm2-cluster package for the Clustered Logical Volume Manager (CLVM). The lvm2-cluster and gfs2-utils packages are part of the Resilient Storage channel, which must be enabled before installing the packages. You can use the following yum install command to install the Red Hat High Availability Add-On software packages: For general information on the Red Hat High Availability Add-On and cluster administration, see the Cluster Administration manual.
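As an optional verification step (illustrative only), you can confirm after installation that the required packages are present:
rpm -q rgmanager lvm2-cluster gfs2-utils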
|
[
"yum install rgmanager lvm2-cluster gfs2-utils"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-ov-GFS2install
|
Chapter 2. New features
|
Chapter 2. New features This section describes new features that the Cryostat 3.0 release provides. Cryostat DB container From Cryostat 3.0 onward, the Red Hat Ecosystem Catalog also includes a Cryostat DB container image ( cryostat-db ). When you install Cryostat by using either the Cryostat Operator or a Helm chart, the cryostat-db container is also automatically deployed. The cryostat-db container provides a lightly customized Postgres database. Cryostat now uses this database for storing information such as the encrypted target JMX credentials keyring, Automated Rules definitions, discovered targets, and discovery plug-ins. In previous releases, Cryostat stored information such as the JMX credentials keyring in a simple H2 file-based database, and other information such as Automated Rules definitions was stored directly as files on disk. With the introduction of the cryostat-db container, Cryostat can now store different types of information in the same Postgres database. Cryostat Storage container From Cryostat 3.0 onward, the Red Hat Ecosystem Catalog also includes a Cryostat Storage container image ( cryostat-storage ). When you install Cryostat by using either the Cryostat Operator or a Helm chart, the cryostat-storage container is also automatically deployed. The cryostat-storage container provides a lightly customized SeaweedFS storage solution that acts as an S3-compatible storage provider. In previous releases, Cryostat used direct storage of files on disk for archived Flight Recordings and custom Event Templates. With the introduction of the cryostat-storage container, Cryostat no longer needs to use direct file-system access for this type of information. Reverse proxy architecture When you install Cryostat 3.0 by using either the Cryostat Operator or a Helm chart, Cryostat now includes a reverse proxy ( openshift-oauth-proxy or oauth2_proxy ) in the pod. Only this proxy is exposed to cluster traffic through a service. This means that all API requests to Cryostat and all users of the Cryostat web console or Grafana dashboard are directed through the proxy. The proxy handles user sessions to control access to the application, providing unified access control and user sessions for both the Cryostat web console and Grafana dashboard. Both of these user interfaces are accessible through the same route and present the same TLS certificate. When deployed on Red Hat OpenShift, the proxy uses the Cryostat installation namespace to perform role-based access control (RBAC) checks for user authentication and authorization by integrating with the Red Hat OpenShift cluster SSO provider. You can optionally configure the auth proxy with an htpasswd file to enable Basic authentication. On Red Hat OpenShift, this allows for defining additional user accounts that can access Cryostat beyond those with Red Hat OpenShift SSO RBAC access. Support for customizing the route host name By default, Red Hat OpenShift Container Platform automatically assigns a host name, based on the cluster's default ingress domain name, for any routes that do not specify a host. Depending on your requirements, you might want to use a particular host name for the route that the Cryostat Operator creates for your Cryostat deployment. In Cryostat 3.0, you can use a new .spec.networkOptions.coreConfig.externalHost property in the Cryostat custom resource (CR) to specify a custom host name for the Cryostat route.
In the Red Hat OpenShift console, you can access this property when creating your Cryostat CR: Alternatively, you can create your Cryostat CR in YAML format. For example: Once a route is created in Red Hat OpenShift Container Platform, you cannot change the route's host name. If you need to change the route's host name after you have already created your Cryostat CR, you must delete the Cryostat CR and create a new CR with the modified host name. Dynamic attachment to the JVM From Cryostat 3.0 onward, the Cryostat agent can attach dynamically to an application JVM that is already running without requiring an application restart. This dynamic attachment feature has the following requirements: You must ensure that the agent's JAR file is copied to the JVM's file system (for example, by using the oc cp command). You must be able to run the agent as a separate process on the same host or within the same application (for example, by using the oc exec command). Dynamic attachment supports ad hoc one-time profiling or troubleshooting workflows where you might not need the agent to be attached every time the JVM starts. Dynamic attachment also suits situations where you cannot or do not want to reconfigure your application for the sole purpose of attaching the agent. Because the agent can attach to a running JVM without requiring an application restart, this also means there is no application downtime. Note In previous releases, your only option was to enable your application's JVM to load and initialize the Cryostat agent at JVM startup. This requires that you configure the application to pass the -javaagent JVM flag with the path to the Cryostat agent's JAR file. Depending on your requirements, you can continue to use this type of static attachment to the JVM. Support for launching the Cryostat agent as a standalone process From Cryostat 3.0 onward, if you want the Cryostat agent to attach dynamically to an application JVM that is already running, you can launch the agent as a standalone Java process. This feature requires that you have already copied the agent's JAR file to the JVM's file system (for example, by using the oc cp command). To launch the agent, you can run the following command, where <agent_jar_file> represents the agent's JAR file name and <pid> represents the process ID (PID) of the JVM: For example: The agent process uses its Attach providers to look up the specified PID. If the specified PID is found, the agent process attaches to this PID and attempts to load the agent's JAR file into this JVM, which then bootstraps into the normal agent launch process. You can also specify additional late-binding configuration options to the agent launcher by using command-line options. For example: For more information about the available options and their behavior, run the java -jar target/cryostat-agent-0.4.0.jar -h help command. System properties that you specify with -D are set onto the host JVM before the injected agent attempts to read the configuration values. This has the same effect as setting these system properties or equivalent environment variables on the host JVM process itself.
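A hypothetical end-to-end sequence for dynamic attachment could look like the following; the pod name, agent JAR version and path, and target PID are placeholders that depend on your environment:
oc cp cryostat-agent-0.4.0.jar <app_pod>:/tmp/cryostat-agent.jar
oc exec <app_pod> -- java -jar /tmp/cryostat-agent.jar <pid>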
|
[
"apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: networkOptions: coreConfig: externalHost: cryostat.example.com",
"java -jar target/<agent_jar_file> <pid>",
"java -jar target/cryostat-agent-0.4.0.jar 1234",
"java -jar target/cryostat-agent-0.4.0.jar -Dcryostat.agent.baseuri=http://cryostat.local --smartTrigger=[ProcessCpuLoad>0.2]~profile @/deployment/app/moreAgentArgs 1234"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/cryostat-new-features_cryostat
|
Chapter 12. DeploymentRequest [apps.openshift.io/v1]
|
Chapter 12. DeploymentRequest [apps.openshift.io/v1] Description DeploymentRequest is a request to a deployment config for a new deployment. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required name latest force 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources excludeTriggers array (string) ExcludeTriggers instructs the instantiator to avoid processing the specified triggers. This field overrides the triggers from latest and allows clients to control specific logic. This field is ignored if not specified. force boolean Force will try to force a new deployment to run. If the deployment config is paused, then setting this to true will return an Invalid error. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds latest boolean Latest will update the deployment config with the latest state from all triggers. name string Name of the deployment config for requesting a new deployment. 12.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/instantiate POST : create instantiate of a DeploymentConfig 12.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/instantiate Table 12.1. Global path parameters Parameter Type Description name string name of the DeploymentRequest Table 12.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create instantiate of a DeploymentConfig Table 12.3. Body parameters Parameter Type Description body DeploymentRequest schema Table 12.4. 
HTTP responses HTTP code Response body 200 - OK DeploymentRequest schema 201 - Created DeploymentRequest schema 202 - Accepted DeploymentRequest schema 401 - Unauthorized Empty
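For illustration, the request can be issued directly with curl; the API server host, namespace, deployment config name, and bearer token are placeholders, and the body follows the schema above (the name field must match the deployment config named in the path):
curl -k -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeploymentRequest","apiVersion":"apps.openshift.io/v1","name":"<name>","latest":true,"force":false}' \
  https://<api_server>:6443/apis/apps.openshift.io/v1/namespaces/<namespace>/deploymentconfigs/<name>/instantiate
In day-to-day use, the oc rollout latest dc/<name> command issues the same instantiate request on your behalf.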
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/deploymentrequest-apps-openshift-io-v1
|
Chapter 4. Kernel Module Management Operator
|
Chapter 4. Kernel Module Management Operator Learn about the Kernel Module Management (KMM) Operator and how you can use it to deploy out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. 4.1. About the Kernel Module Management Operator The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. KMM adds a new Module CRD which describes an out-of-tree kernel module and its associated device plugin. You can use Module resources to configure how to load the module, define ModuleLoader images for kernel versions, and include instructions for building and signing modules for specific kernel versions. KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime. 4.2. Installing the Kernel Module Management Operator As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI or the web console. The KMM Operator is supported on OpenShift Container Platform 4.12 and later. Installing KMM on version 4.11 does not require specific additional steps. For details on installing KMM on version 4.10 and earlier, see the section "Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform". 4.2.1. Installing the Kernel Module Management Operator using the web console As a cluster administrator, you can install the Kernel Module Management (KMM) Operator using the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Install the Kernel Module Management Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select Kernel Module Management Operator from the list of available Operators, and then click Install . From the Installed Namespace list, select the openshift-kmm namespace. Click Install . Verification To verify that KMM Operator installed successfully: Navigate to the Operators Installed Operators page. Ensure that Kernel Module Management Operator is listed in the openshift-kmm project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting To troubleshoot issues with Operator installation: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-kmm project. 4.2.2. Installing the Kernel Module Management Operator by using the CLI As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
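Before starting the procedure, you can optionally confirm these prerequisites from the CLI (an illustrative check only):
oc whoami
oc auth can-i '*' '*'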
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s The Operator is available. 4.2.3. Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform The KMM Operator is supported on OpenShift Container Platform 4.12 and later. For version 4.10 and earlier, you must create a new SecurityContextConstraint object and bind it to the Operator's ServiceAccount . As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following SecurityContextConstraint object and save the YAML file, for example, kmm-security-constraint.yaml : allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret Bind the SecurityContextConstraint object to the Operator's ServiceAccount by running the following commands: USD oc apply -f kmm-security-constraint.yaml USD oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s The Operator is available. 4.3. Uninstalling the Kernel Module Management Operator Use one of the following procedures to uninstall the Kernel Module Management (KMM) Operator, depending on how the KMM Operator was installed. 4.3.1. Uninstalling a Red Hat catalog installation Use this procedure if KMM was installed from the Red Hat catalog. Procedure Use the following method to uninstall the KMM Operator: Use the OpenShift console under Operators --> Installed Operators to locate and uninstall the Operator. Note Alternatively, you can delete the Subscription resource in the KMM namespace. 4.3.2. Uninstalling a CLI installation Use this command if the KMM Operator was installed using the OpenShift CLI. Procedure Run the following command to uninstall the KMM Operator: USD oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default Note Using this command deletes the Module CRD and all Module instances in the cluster. 4.4. Kernel module deployment For each Module resource, Kernel Module Management (KMM) can create a number of DaemonSet resources: One ModuleLoader DaemonSet per compatible kernel version running in the cluster. One device plugin DaemonSet , if configured. 
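As an illustrative check of what KMM creates for a given Module (the namespace and module name below are placeholders), you can list the resulting daemon sets and inspect the Module resource:
oc get daemonsets -n <module_namespace>
oc describe module <my_kmod> -n <module_namespace>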
The module loader daemon set resources run ModuleLoader images to load kernel modules. A module loader image is an OCI image that contains the .ko files and both the modprobe and sleep binaries. When the module loader pod is created, the pod runs modprobe to insert the specified module into the kernel. It then enters a sleep state until it is terminated. When that happens, the ExecPreStop hook runs modprobe -r to unload the kernel module. If the .spec.devicePlugin attribute is configured in a Module resource, then KMM creates a device plugin daemon set in the cluster. That daemon set targets: Nodes that match the .spec.selector of the Module resource. Nodes with the kernel module loaded (where the module loader pod is in the Ready condition). 4.4.1. The Module custom resource definition The Module custom resource definition (CRD) represents a kernel module that can be loaded on all or select nodes in the cluster, through a module loader image. A Module custom resource (CR) specifies one or more kernel versions with which it is compatible, and a node selector. The compatible versions for a Module resource are listed under .spec.moduleLoader.container.kernelMappings . A kernel mapping can either match a literal version, or use regexp to match many of them at the same time. The reconciliation loop for the Module resource runs the following steps: List all nodes matching .spec.selector . Build a set of all kernel versions running on those nodes. For each kernel version: Go through .spec.moduleLoader.container.kernelMappings and find the appropriate container image name. If the kernel mapping has build or sign defined and the container image does not already exist, run the build, the signing job, or both, as needed. Create a module loader daemon set with the container image determined in the step. If .spec.devicePlugin is defined, create a device plugin daemon set using the configuration specified under .spec.devicePlugin.container . Run garbage-collect on: Existing daemon set resources targeting kernel versions that are not run by any node in the cluster. Successful build jobs. Successful signing jobs. 4.4.2. Set soft dependencies between kernel modules Some configurations require that several kernel modules be loaded in a specific order to work properly, even though the modules do not directly depend on each other through symbols. These are called soft dependencies. depmod is usually not aware of these dependencies, and they do not appear in the files it produces. For example, if mod_a has a soft dependency on mod_b , modprobe mod_a will not load mod_b . You can resolve these situations by declaring soft dependencies in the Module Custom Resource Definition (CRD) using the modulesLoadingOrder field. # ... spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b In the configuration above: The loading order is mod_b , then mod_a . The unloading order is mod_a , then mod_b . Note The first value in the list, to be loaded last, must be equivalent to the moduleName . 4.4.3. Security and permissions Important Loading kernel modules is a highly sensitive operation. After they are loaded, kernel modules have all possible permissions to do any kind of operation on the node. 4.4.3.1. ServiceAccounts and SecurityContextConstraints Kernel Module Management (KMM) creates a privileged workload to load the kernel modules on nodes. 
That workload needs ServiceAccounts allowed to use the privileged SecurityContextConstraint (SCC) resource. The authorization model for that workload depends on the namespace of the Module resource, as well as its spec. If the .spec.moduleLoader.serviceAccountName or .spec.devicePlugin.serviceAccountName fields are set, they are always used. If those fields are not set, then: If the Module resource is created in the operator's namespace ( openshift-kmm by default), then KMM uses its default, powerful ServiceAccounts to run the daemon sets. If the Module resource is created in any other namespace, then KMM runs the daemon sets as the namespace's default ServiceAccount . The Module resource cannot run a privileged workload unless you manually enable it to use the privileged SCC. Important openshift-kmm is a trusted namespace. When setting up RBAC permissions, remember that any user or ServiceAccount creating a Module resource in the openshift-kmm namespace results in KMM automatically running privileged workloads on potentially all nodes in the cluster. To allow any ServiceAccount to use the privileged SCC and therefore to run module loader or device plugin pods, use the following command: USD oc adm policy add-scc-to-user privileged -z "USD{serviceAccountName}" [ -n "USD{namespace}" ] 4.4.3.2. Pod security standards OpenShift runs a synchronization mechanism that sets the namespace Pod Security level automatically based on the security contexts in use. No action is needed. Additional resources Understanding and managing pod security admission . 4.5. Replacing in-tree modules with out-of-tree modules You can use Kernel Module Management (KMM) to build kernel modules that can be loaded or unloaded into the kernel on demand. These modules extend the functionality of the kernel without the need to reboot the system. Modules can be configured as built-in or dynamically loaded. Dynamically loaded modules include in-tree modules and out-of-tree (OOT) modules. In-tree modules are internal to the Linux kernel tree, that is, they are already part of the kernel. Out-of-tree modules are external to the Linux kernel tree. They are generally written for development and testing purposes, such as testing the new version of a kernel module that is shipped in-tree, or to deal with incompatibilities. Some modules loaded by KMM could replace in-tree modules already loaded on the node. To unload an in-tree module before loading your module, set the .spec.moduleLoader.container.inTreeModuleToRemove field. The following is an example for module replacement for all kernel mappings: # ... spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModuleToRemove: mod_b In this example, the moduleLoader pod uses inTreeModuleToRemove to unload the in-tree mod_b before loading mod_a from the moduleLoader image. When the moduleLoader pod is terminated and mod_a is unloaded, mod_b is not loaded again. The following is an example for module replacement for specific kernel mappings: # ... spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 inTreeModuleToRemove: <module_name> Additional resources Building a linux kernel module 4.5.1.
Example Module CR The following is an annotated Module example: apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\fc37\.x86_64USD' 6 containerImage: "some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" - regexp: '^.+USD' 7 containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: "" 1 1 1 Required. 2 Optional. 3 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4 Optional. 5 At least one kernel item is required. 6 For each node running a kernel matching the regular expression, KMM creates a DaemonSet resource running the image specified in containerImage with USD{KERNEL_FULL_VERSION} replaced with the kernel version. 7 For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 8 Optional. 9 Optional: A value for some-kubernetes-secret can be obtained from the build environment at /run/secrets/some-kubernetes-secret . 10 Optional: Avoid using this parameter. If set to true , the build is allowed to pull the image in the Dockerfile FROM instruction using plain HTTP. 11 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 12 Required. 13 Required: A secret holding the public secureboot key with the key 'cert'. 14 Required: A secret holding the private secureboot key with the key 'key'. 15 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 16 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. 17 Optional. 18 Optional. 19 Required: If the device plugin section is present. 20 Optional. 21 Optional. 22 Optional. 23 Optional: Used to pull module loader and device plugin images. 4.6. Using a ModuleLoader image Kernel Module Management (KMM) works with purpose-built module loader images. These are standard OCI images that must satisfy the following requirements: .ko files must be located in /opt/lib/modules/USD{KERNEL_VERSION} . modprobe and sleep binaries must be defined in the USDPATH variable. 4.6.1. 
Running depmod If your module loader image contains several kernel modules and if one of the modules depends on another module, it is best practice to run depmod at the end of the build process to generate dependencies and map files. Note You must have a Red Hat subscription to download the kernel-devel package. Procedure To generate modules.dep and .map files for a specific kernel version, run depmod -b /opt USD{KERNEL_VERSION} . 4.6.1.1. Example Dockerfile If you are building your image on OpenShift Container Platform, consider using the Driver Tool Kit (DTK). For further information, see using an entitled build . apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION} Additional resources Driver Toolkit . 4.6.2. Building in the cluster KMM can build module loader images in the cluster. Follow these guidelines: Provide build instructions using the build section of a kernel mapping. Copy the Dockerfile for your container image into a ConfigMap resource, under the dockerfile key. Ensure that the ConfigMap is located in the same namespace as the Module . KMM checks if the image name specified in the containerImage field exists. If it does, the build is skipped. Otherwise, KMM creates a Build resource to build your image. After the image is built, KMM proceeds with the Module reconciliation. See the following example. # ... - regexp: '^.+USD' containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8 1 Optional. 2 Optional. 3 Will be mounted in the build pod as /run/secrets/some-kubernetes-secret . 4 Optional: Avoid using this parameter. If set to true , the build will be allowed to pull the image in the Dockerfile FROM instruction using plain HTTP. 5 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 6 Required. 7 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 8 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. Additional resources Build configuration resources . 4.6.3. Using the Driver Toolkit The Driver Toolkit (DTK) is a convenient base image for building build module loader images. It contains tools and libraries for the OpenShift version currently running in the cluster. Procedure Use DTK as the first stage of a multi-stage Dockerfile . Build the kernel modules. 
Copy the .ko files into a smaller end-user image such as ubi-minimal . To leverage DTK in your in-cluster build, use the DTK_AUTO build argument. The value is automatically set by KMM when creating the Build resource. See the following example. ARG DTK_AUTO FROM ${DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/${KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/${KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/${KERNEL_VERSION}/ RUN depmod -b /opt ${KERNEL_VERSION} Additional resources Driver Toolkit . 4.7. Using signing with Kernel Module Management (KMM) On a Secure Boot enabled system, all kernel modules (kmods) must be signed with a public/private key-pair enrolled into the Machine Owner's Key (MOK) database. Drivers distributed as part of a distribution should already be signed by the distribution's private key, but for kernel modules built out-of-tree, KMM supports signing kernel modules using the sign section of the kernel mapping. For more details on using Secure Boot, see Generating a public and private key pair . Prerequisites A public/private key pair in the correct (DER) format. At least one secure-boot enabled node with the public key enrolled in its MOK database. Either a pre-built driver container image, or the source code and Dockerfile needed to build one in-cluster. 4.8. Adding the keys for secureboot To use Kernel Module Management (KMM) to sign kernel modules, a certificate and private key are required. For details on how to create these, see Generating a public and private key pair . For details on how to extract the public and private key pair, see Signing kernel modules with the private key . Use steps 1 through 4 to extract the keys into files. Procedure Create the sb_cert.cer file that contains the certificate and the sb_cert.priv file that contains the private key: $ openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv Add the files by using one of the following methods: Add the files as secrets directly: $ oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv> $ oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der> Add the files by base64 encoding them: $ cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64 $ cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64 Add the encoded text to a YAML file: apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key> 1 2 namespace - Replace default with a valid namespace. Apply the YAML file: $ oc apply -f <yaml_filename> 4.8.1. Checking the keys After you have added the keys, you must check them to ensure they are set correctly.
Procedure Check to ensure the public key secret is set correctly: $ oc get secret -o yaml <certificate secret name> | awk '/cert/{print $2; exit}' | base64 -d | openssl x509 -inform der -text This should display a certificate with a Serial Number, Issuer, Subject, and more. Check to ensure the private key secret is set correctly: $ oc get secret -o yaml <private key secret name> | awk '/key/{print $2; exit}' | base64 -d This should display the key enclosed in the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- lines. 4.9. Signing a pre-built driver container Use this procedure if you have a pre-built image, such as an image either distributed by a hardware vendor or built elsewhere. The following YAML file adds the public/private key-pair as secrets with the required key names - key for the private key, cert for the public key. The cluster then pulls down the unsignedImage image, opens it, signs the kernel modules listed in filesToSign , adds them back, and pushes the resulting image as containerImage . Kernel Module Management (KMM) should then deploy the DaemonSet that loads the signed kmods onto all the nodes that match the selector. The driver containers should run successfully on any nodes that have the public key in their MOK database, and any nodes that are not secure-boot enabled, which ignore the signature. They should fail to load on any that have secure-boot enabled but do not have that key in their MOK database. Prerequisites The keySecret and certSecret secrets have been created. Procedure Apply the YAML file: --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\.x86_64$' # the container to produce containing the signed kmods containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64 1 modprobe - The name of the kmod to load. 4.10. Building and signing a ModuleLoader container image Use this procedure if you have source code and must build your image first. The following YAML file builds a new container image using the source code from the repository. The image produced is saved back in the registry with a temporary name, and this temporary image is then signed using the parameters in the sign section. The temporary image name is based on the final image name and is set to be <containerImage>:<tag>-<namespace>_<module name>_kmm_unsigned .
For example, using the following YAML file, Kernel Module Management (KMM) builds an image named example.org/repository/minimal-driver:final-default_example-module_kmm_unsigned containing the build with unsigned kmods and pushes it to the registry. Then it creates a second image named example.org/repository/minimal-driver:final that contains the signed kmods. It is this second image that is loaded by the DaemonSet object and deploys the kmods to the cluster nodes. After it is signed, the temporary image can be safely deleted from the registry. It will be rebuilt, if needed. Prerequisites The keySecret and certSecret secrets have been created. Procedure Apply the YAML file: --- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM ${DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/${KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/${KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\.x86_64$' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64 1 2 namespace - Replace default with a valid namespace. 3 serviceAccountName - The default serviceAccountName does not have the required permissions to run a module that is privileged. For information on creating a service account, see "Creating service accounts" in the "Additional resources" of this section. 4 imageRepoSecret - Used as imagePullSecrets in the DaemonSet object and to pull and push for the build and sign features. Additional resources For information on creating a service account, see Creating service accounts . 4.11. KMM hub and spoke In hub and spoke scenarios, many spoke clusters are connected to a central, powerful hub cluster. Kernel Module Management (KMM) depends on Red Hat Advanced Cluster Management (RHACM) to operate in hub and spoke environments. KMM is compatible with hub and spoke environments through decoupling KMM features. A ManagedClusterModule Custom Resource Definition (CRD) is provided to wrap the existing Module CRD and extend it to select Spoke clusters. Also provided is KMM-Hub, a new standalone controller that builds images and signs modules on the hub cluster. In hub and spoke setups, spokes are focused, resource-constrained clusters that are centrally managed by a hub cluster. Spokes run the single-cluster edition of KMM, with those resource-intensive features disabled. To adapt KMM to this environment, you should reduce the workload running on the spokes to the minimum, while the hub takes care of the expensive tasks. Building kernel module images and signing the .ko files should run on the hub.
The scheduling of the Module Loader and Device Plugin DaemonSets can only happen on the spokes. Additional resources Red Hat Advanced Cluster Management (RHACM) 4.11.1. KMM-Hub The KMM project provides KMM-Hub, an edition of KMM dedicated to hub clusters. KMM-Hub monitors all kernel versions running on the spokes and determines the nodes on the cluster that should receive a kernel module. KMM-Hub runs all compute-intensive tasks such as image builds and kmod signing, and prepares the trimmed-down Module to be transferred to the spokes through RHACM. Note KMM-Hub cannot be used to load kernel modules on the hub cluster. Install the regular edition of KMM to load kernel modules. Additional resources Installing KMM 4.11.2. Installing KMM-Hub You can use one of the following methods to install KMM-Hub: Using the Operator Lifecycle Manager (OLM) Creating KMM resources Additional resources KMM Operator bundle 4.11.2.1. Installing KMM-Hub using the Operator Lifecycle Manager Use the Operators section of the OpenShift console to install KMM-Hub. 4.11.2.2. Installing KMM-Hub by creating KMM resources Procedure If you want to install KMM-Hub programmatically, you can use the following resources to create the Namespace , OperatorGroup and Subscription resources: --- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace 4.11.3. Using the ManagedClusterModule CRD Use the ManagedClusterModule Custom Resource Definition (CRD) to configure the deployment of kernel modules on spoke clusters. This CRD is cluster-scoped, wraps a Module spec and adds the following additional fields: apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true' 1 moduleSpec : Contains moduleLoader and devicePlugin sections, similar to a Module resource. 2 Selects nodes within the ManagedCluster . 3 Specifies in which namespace the Module should be created. 4 Selects ManagedCluster objects. If build or signing instructions are present in .spec.moduleSpec , those pods are run on the hub cluster in the operator's namespace. When the .spec.selector matches one or more ManagedCluster resources, then KMM-Hub creates a ManifestWork resource in the corresponding namespace(s). ManifestWork contains a trimmed-down Module resource, with kernel mappings preserved but all build and sign subsections are removed. containerImage fields that contain image names ending with a tag are replaced with their digest equivalent. 4.11.4. Running KMM on the spoke After installing Kernel Module Management (KMM) on the spoke, no further action is required. Create a ManagedClusterModule object from the hub to deploy kernel modules on spoke clusters. Procedure You can install KMM on the spokes cluster through a RHACM Policy object. 
In addition to installing KMM from the OperatorHub and running it in a lightweight spoke mode, the Policy configures additional RBAC required for the RHACM agent to be able to manage Module resources. Use the following RHACM policy to install KMM on spoke clusters: 1 This environment variable is required when running KMM on a spoke cluster. 2 The spec.clusterSelector field can be customized to target select clusters only. 4.12. Customizing upgrades for kernel modules Use this procedure to upgrade the kernel module while running maintenance operations on the node, including rebooting the node, if needed. To minimize the impact on the workloads running in the cluster, run the kernel upgrade process sequentially, one node at a time. Note This procedure requires knowledge of the workload utilizing the kernel module and must be managed by the cluster administrator. Prerequisites Before upgrading, set the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=$moduleVersion label on all the nodes that are used by the kernel module. Terminate all user application workloads on the node or move them to another node. Unload the currently loaded kernel module. Ensure that the user workload (the application running in the cluster that is accessing the kernel module) is not running on the node prior to kernel module unloading and that the workload is back running on the node after the new kernel module version has been loaded. Procedure Ensure that the device plugin managed by KMM on the node is unloaded. Update the following fields in the Module custom resource (CR): containerImage (to the appropriate kernel version) version The update should be atomic; that is, both the containerImage and version fields must be updated simultaneously. Terminate any workload using the kernel module on the node being upgraded. Remove the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name> label on the node. Run the following command to unload the kernel module from the node: $ oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>- If required, as the cluster administrator, perform any additional maintenance required on the node for the kernel module upgrade. If no additional upgrading is needed, you can skip Steps 3 through 6 by updating the kmm.node.kubernetes.io/version-module.<module-namespace>.<module-name> label value to the new $moduleVersion as set in the Module . Run the following command to add the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=$moduleVersion label to the node. The $moduleVersion must be equal to the new value of the version field in the Module CR. $ oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version> Note Because of Kubernetes limitations in label names, the combined length of Module name and namespace must not exceed 39 characters. Restore any workload that leverages the kernel module on the node. Reload the device plugin managed by KMM on the node. 4.13. Day 1 kernel module loading Kernel Module Management (KMM) is typically a Day 2 Operator. Kernel modules are loaded only after the complete initialization of a Linux (RHCOS) server. However, in some scenarios the kernel module must be loaded at an earlier stage. Day 1 functionality allows you to use the Machine Config Operator (MCO) to load kernel modules during the Linux systemd initialization stage. Additional resources Machine Config Operator 4.13.1.
Day 1 supported use cases The Day 1 functionality supports a limited number of use cases. The main use case is to allow loading out-of-tree (OOT) kernel modules prior to NetworkManager service initialization. It does not support loading kernel modules at the initramfs stage. The following are the conditions needed for Day 1 functionality: The kernel module is not loaded in the kernel. The in-tree kernel module is loaded into the kernel, but can be unloaded and replaced by the OOT kernel module. This means that the in-tree module is not referenced by any other kernel modules. In order for Day 1 functionality to work, the node must have a functional network interface, that is, an in-tree kernel driver for that interface. The OOT kernel module can be a network driver that will replace the functional network driver. 4.13.2. OOT kernel module loading flow The loading of the out-of-tree (OOT) kernel module leverages the Machine Config Operator (MCO). The flow sequence is as follows: Procedure Apply a MachineConfig resource to the existing running cluster. In order to identify the necessary nodes that need to be updated, you must create an appropriate MachineConfigPool resource. MCO applies the configuration and reboots the nodes one by one. On any rebooted node, two new systemd services are deployed: pull service and load service. The load service is configured to run prior to the NetworkConfiguration service. The service tries to pull a predefined kernel module image and then, using that image, to unload an in-tree module and load an OOT kernel module. The pull service is configured to run after the NetworkManager service. The service checks if the preconfigured kernel module image is located on the node's filesystem. If it is, the service exits normally, and the server continues with the boot process. If not, it pulls the image onto the node and reboots the node afterwards. 4.13.3. The kernel module image The Day 1 functionality uses the same DTK-based image leveraged by Day 2 KMM builds. The out-of-tree kernel module should be located under /opt/lib/modules/${kernelVersion} . Additional resources Driver Toolkit 4.13.4. In-tree module replacement The Day 1 functionality always tries to replace the in-tree kernel module with the OOT version. If the in-tree kernel module is not loaded, the flow is not affected; the service proceeds and loads the OOT kernel module. 4.13.5. MCO yaml creation KMM provides an API to create an MCO YAML manifest for the Day 1 functionality: ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error) The returned output is a string representation of the MCO YAML manifest to be applied. It is up to the customer to apply this YAML. The parameters are: machineConfigName The name of the MCO YAML manifest. This parameter is set as the name parameter of the metadata of the MCO YAML manifest. machineConfigPoolRef The MachineConfigPool name used to identify the targeted nodes. kernelModuleImage The name of the container image that includes the OOT kernel module. kernelModuleName The name of the OOT kernel module. This parameter is used both to unload the in-tree kernel module (if loaded into the kernel) and to load the OOT kernel module. The API is located under the pkg/mcproducer package of the KMM source code. The KMM operator does not need to be running to use the Day 1 functionality. You only need to import the pkg/mcproducer package into your operator/utility code, call the API, and apply the produced MCO YAML to the cluster.
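The following Go sketch shows one way a utility could call this API and print the resulting manifest. It is an illustrative sketch only: the import path is inferred from the KMM repository referenced elsewhere in this document, and the manifest name, image reference, and module name are placeholder values.

package main

import (
	"fmt"
	"os"

	// Assumed import path, derived from the pkg/mcproducer location in the KMM source tree.
	"github.com/rh-ecosystem-edge/kernel-module-management/pkg/mcproducer"
)

func main() {
	// Produce the MachineConfig YAML for a Day 1 kernel module.
	// All four argument values below are placeholders for illustration.
	manifest, err := mcproducer.ProduceMachineConfig(
		"99-worker-day1-kmod",          // machineConfigName: metadata name of the generated manifest
		"worker",                       // machineConfigPoolRef: targeted MachineConfigPool
		"quay.io/example/my-kmod:day1", // kernelModuleImage: image containing the OOT kernel module
		"my_kmod",                      // kernelModuleName: module to unload in-tree and load OOT
	)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to produce MachineConfig: %v\n", err)
		os.Exit(1)
	}
	// The returned string is the MCO YAML manifest; applying it is up to the caller.
	fmt.Print(manifest)
}

You could then pipe the printed manifest to oc apply -f - , keeping in mind that MCO reboots the targeted nodes as it rolls out the change.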
4.13.6. The MachineConfigPool The MachineConfigPool identifies a collection of nodes that are affected by the applied MCO. kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: "" paused: false maxUnavailable: 1 1 Matches the labels in the MachineConfig. 2 Matches the labels on the node. There are predefined MachineConfigPools in the OCP cluster: worker : Targets all worker nodes in the cluster master : Targets all master nodes in the cluster Define the following MachineConfig to target the master MachineConfigPool : metadata: labels: machineconfiguration.openshift.io/role: master Define the following MachineConfig to target the worker MachineConfigPool : metadata: labels: machineconfiguration.openshift.io/role: worker Additional resources About MachineConfigPool 4.14. Debugging and troubleshooting If the kmods in your driver container are not signed or are signed with the wrong key, then the container can enter a PostStartHookError or CrashLoopBackOff status. You can verify by running the oc describe command on your container, which displays the following message in this scenario: modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available 4.15. KMM firmware support Kernel modules sometimes need to load firmware files from the file system. KMM supports copying firmware files from the ModuleLoader image to the node's file system. The contents of .spec.moduleLoader.container.modprobe.firmwarePath are copied into the /var/lib/firmware path on the node before running the modprobe command to insert the kernel module. All files and empty directories are removed from that location before running the modprobe -r command to unload the kernel module, when the pod is terminated. Additional resources Creating a ModuleLoader image . 4.15.1. Configuring the lookup path on nodes On OpenShift Container Platform nodes, the set of default lookup paths for firmware does not include the /var/lib/firmware path. Procedure Use the Machine Config Operator to create a MachineConfig custom resource (CR) that contains the /var/lib/firmware path: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware' 1 You can configure the label based on your needs. In the case of single-node OpenShift, use either control-plane or master objects. By applying the MachineConfig CR, the nodes are automatically rebooted. Additional resources Machine Config Operator . 4.15.2. Building a ModuleLoader image Procedure In addition to building the kernel module itself, include the binary firmware in the builder image: FROM registry.redhat.io/ubi9/ubi-minimal as builder # Build the kmod RUN ["mkdir", "/firmware"] RUN ["curl", "-o", "/firmware/firmware.bin", "https://artifacts.example.com/firmware.bin"] FROM registry.redhat.io/ubi9/ubi-minimal # Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware 4.15.3.
Tuning the Module resource Procedure Set .spec.moduleLoader.container.modprobe.firmwarePath in the Module custom resource (CR): apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1 1 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4.16. Troubleshooting KMM When troubleshooting KMM installation issues, you can monitor logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. 4.16.1. Using the must-gather tool The oc adm must-gather command is the preferred way to collect a support bundle and provide debugging information to Red Hat Support. Collect specific information by running the command with the appropriate arguments as described in the following sections. Additional resources About the must-gather tool 4.16.1.1. Gathering data for KMM Procedure Gather the data for the KMM Operator controller manager: Set the MUST_GATHER_IMAGE variable: $ export MUST_GATHER_IMAGE=$(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}') Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: $ oc adm must-gather --image="${MUST_GATHER_IMAGE}" -- /usr/bin/gather View the Operator logs: $ oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager Example 4.1. Example output I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0228 09:36:40.769483 1 main.go:234] kmm/setup "msg"="starting manager" I0228 09:36:40.769907 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0228 09:36:40.770025 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io...
I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1beta1.Module" I0228 09:36:40.784925 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.DaemonSet" I0228 09:36:40.784968 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Build" I0228 09:36:40.785001 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Job" I0228 09:36:40.785025 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Node" I0228 09:36:40.785039 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" I0228 09:36:40.785458 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.Pod" I0228 09:36:40.786947 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787406 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Build" I0228 09:36:40.787474 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Job" I0228 09:36:40.787488 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.Module" I0228 09:36:40.787603 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" "source"="kind source: *v1.Node" I0228 09:36:40.787634 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" I0228 09:36:40.787680 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" I0228 09:36:40.785607 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0228 09:36:40.787822 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidationOCP" I0228 09:36:40.787853 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0228 09:36:40.787879 1 controller.go:185] kmm 
"msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787905 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" I0228 09:36:40.786489 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" 4.16.1.2. Gathering data for KMM-Hub Procedure Gather the data for the KMM Operator hub controller manager: Set the MUST_GATHER_IMAGE variable: USD export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}') Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather -u View the Operator logs: USD oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager Example 4.2. Example output I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup "msg"="Adding controller" "name"="ManagedClusterModule" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup "msg"="starting manager" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io... 
I0417 11:34:12.378078 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0417 11:34:12.378222 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1beta1.ManagedClusterModule" I0417 11:34:12.396403 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManifestWork" I0417 11:34:12.396430 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Build" I0417 11:34:12.396469 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Job" I0417 11:34:12.396522 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManagedCluster" I0417 11:34:12.396543 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" I0417 11:34:12.397175 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0417 11:34:12.397221 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0417 11:34:12.498335 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="local-cluster" I0417 11:34:12.498570 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498629 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498687 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="sno1-0" I0417 11:34:12.498750 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.498801 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.501947 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "worker count"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "worker count"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub "msg"="registered imagestream info mapping" "ImageStream"={"name":"driver-toolkit","namespace":"openshift"} "controller"="imagestream" 
"controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "dtkImage"="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934" "name"="driver-toolkit" "namespace"="openshift" "osImageVersion"="412.86.202302211547-0" "reconcileID"="e709ff0a-5664-4007-8270-49b5dff8bae9"
|
[
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc apply -f kmm-security-constraint.yaml",
"oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default",
"spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b",
"oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]",
"spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModuleToRemove: mod_b",
"spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 inTreeModuleToRemove: <module_name>",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8",
"ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv",
"oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>",
"oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>",
"cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64",
"cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64",
"apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>",
"oc apply -f <yaml_filename>",
"oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text",
"oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d",
"--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'",
"--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>",
"ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)",
"kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1",
"metadata: labels: machineconfiguration.opensfhit.io/role: master",
"metadata: labels: machineconfiguration.opensfhit.io/role: worker",
"modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'",
"FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather",
"oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager",
"I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" 
\"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u",
"oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager",
"I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" 
\"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\""
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/specialized_hardware_and_driver_enablement/kernel-module-management-operator
|
function::user_long_warn
|
function::user_long_warn Name function::user_long_warn - Retrieves a long value stored in user space Synopsis Arguments addr the user space address to retrieve the long from Description Returns the long value from a given user space address. Returns zero when the user space data is not accessible and warns (but does not abort) about the failure. Note that the size of the long depends on the architecture of the current user space task (for those architectures that support both 64/32 bit compat tasks).
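A minimal usage sketch (an illustration, not part of the reference page): the probed binary, function name, and the $ptr target variable below are hypothetical placeholders; only the user_long_warn call itself comes from this tapset.

# Hypothetical probe: read a long through a user-space pointer argument.
# /usr/local/bin/myapp, worker(), and $ptr stand in for a real target.
probe process("/usr/local/bin/myapp").function("worker") {
  val = user_long_warn($ptr)   # returns 0 and prints a warning if the read fails
  printf("%s: *ptr = %d\n", execname(), val)
}

Run the script with stap; on a 32-bit compat task the value read is a 32-bit long, as noted above.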
|
[
"user_long_warn:long(addr:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-long-warn
|
Chapter 2. Differences from upstream OpenJDK 11
|
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and, if so, configures itself to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, certificate path validation, and signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. It also dynamically links against HarfBuzz and FreeType for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
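As a quick way to confirm what the first two items will pick up on a given host, the standard RHEL 8 and later crypto-policies utilities report the FIPS state and the active system-wide policy (these commands are RHEL tools, not part of Red Hat build of OpenJDK itself):

# Report whether the host is in FIPS mode
fips-mode-setup --check
# Show the active system-wide cryptographic policy (for example DEFAULT, LEGACY, FIPS)
update-crypto-policies --show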
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/getting_started_with_red_hat_build_of_openjdk_11/rn-openjdk-diff-from-upstream
|
probe::nfsd.proc.lookup
|
probe::nfsd.proc.lookup Name probe::nfsd.proc.lookup - NFS server opening or searching for a file for client Synopsis nfsd.proc.lookup Values fh file handle of parent dir (the first part is the length of the file handle) gid requester's group id filelen the length of file name uid requester's user id version nfs version proto transfer protocol filename file name client_ip the ip address of client
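A short illustrative script (a usage sketch, not taken from the reference) that prints a subset of the context values listed above for every lookup the server handles:

# Trace NFS server LOOKUP requests and print who asked for which file.
probe nfsd.proc.lookup {
  printf("NFSv%d LOOKUP %s (len=%d) by uid=%d gid=%d\n",
         version, filename, filelen, uid, gid)
}

Run it with stap on the NFS server; each handled lookup produces one line.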
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-proc-lookup
|
Chapter 1. Troubleshooting OpenShift Lightspeed
|
Chapter 1. Troubleshooting OpenShift Lightspeed The following topics provide information to help troubleshoot OpenShift Lightspeed. 1.1. 502 Bad Gateway Errors in the interface If you try to start using OpenShift Lightspeed before the internal components have stabilized, 502 Bad Gateway errors can occur. While all the pods might be running, there can be other console and internal OpenShift Container Platform system components that are not yet ready. Wait a few minutes and try using OpenShift Lightspeed again. 1.2. Operator Missing from the Operator Hub list If you are not running your OpenShift Container Platform cluster on an x86 architecture, for example, OpenShift Local on an M1 Mac, you will not see the OpenShift Lightspeed Operator in Operator Hub due to the way the Operator Hub filters architectures. OpenShift Lightspeed is supported only on the x86 architecture.
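For the 502 case in section 1.1, one way to confirm that things have settled before retrying is to check the Lightspeed pods and the console-related cluster Operators (the openshift-lightspeed namespace name below is an assumption for illustration):

# Lightspeed Operator and service pods should all be Running and Ready
oc get pods -n openshift-lightspeed
# The console and authentication Operators should report Available=True
oc get clusteroperators console authentication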
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_lightspeed/1.0tp1/html/troubleshoot/ols-troubleshooting-openshift-lightspeed
|
Chapter 3. glance
|
Chapter 3. glance The following chapter contains information about the configuration options in the glance service. 3.1. glance-api.conf This section contains options for the /etc/glance/glance-api.conf file. 3.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-api.conf file. Configuration option = Default value Type Description admin_password = None string value The administrator's password. If "use_user_token" is not in effect, then admin credentials can be specified. admin_role = admin string value Role used to identify an authenticated user as administrator. Provide a string value representing a Keystone role to identify an administrative user. Users with this role will be granted administrative privileges. The default value for this option is admin . Possible values: A string value which is a valid Keystone role Related options: None admin_tenant_name = None string value The tenant name of the administrative user. If "use_user_token" is not in effect, then the admin tenant name can be specified. admin_user = None string value The administrator's user name. If "use_user_token" is not in effect, then admin credentials can be specified. allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via the image_property_quota configuration option. Possible values: True False Related options: image_property_quota allow_anonymous_access = False boolean value Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: True False Related options: None allowed_rpc_exception_modules = ['glance.common.exception', 'builtins', 'exceptions'] list value List of allowed exception modules to handle RPC exceptions. Provide a comma separated list of modules whose exceptions are permitted to be recreated upon receiving exception data via an RPC call made to Glance. The default list includes glance.common.exception , builtins , and exceptions . The RPC protocol permits interaction with Glance via calls across a network or within the same system. Including a list of exception namespaces with this option enables RPC to propagate the exceptions back to the users. Possible values: A comma separated list of valid exception modules Related options: None api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned is governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case can't be greater than the absolute maximum defined by this configuration option.
Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default auth_region = None string value The region for the authentication service. If "use_user_token" is not in effect and using keystone auth, then region name can be specified. auth_strategy = noauth string value The strategy to use for authentication. If "use_user_token" is not in effect, then auth strategy can be specified. auth_url = None string value The URL to the keystone service. If "use_user_token" is not in effect and using keystone auth, then URL of keystone can be specified. backlog = 4096 integer value Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: Positive integer Related options: None bind_host = 0.0.0.0 host address value IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is 0.0.0.0 . Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: A valid IPv4 address A valid IPv6 address Related options: None bind_port = None port value Port number on which the server will listen. Provide a valid port number to bind the server's socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: A valid port number (0 to 65535) Related options: None ca_file = None string value Absolute path to the CA file. Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication. A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet. Possible values: Valid absolute path to the CA file Related options: None cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely. A certificate file typically is a public key container and includes the server's public key, server name, server information and the signature which was a result of the verification process using the CA certificate. This is required for a secure connection establishment. Possible values: Valid absolute path to the certificate file Related options: None client_socket_timeout = 900 integer value Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. 
Possible values: Zero Positive integer Related options: None conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = image.localhost string value Default publisher_id for outgoing Glance notifications. This is the value that the notification driver will use to identify messages for events originating from the Glance service. Typically, this is the hostname of the instance that generated the message. Possible values: Any reasonable instance identifier, for example: image.host1 Related options: None delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. 
Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None disabled_notifications = [] list value List of notifications to be disabled. Specify a list of notifications that should not be emitted. A notification can be given either as a notification type to disable a single event notification, or as a notification group prefix to disable all event notifications within a group. Possible values: A comma-separated list of individual notification types or notification groups to be disabled. Currently supported groups: image image.member task metadef_namespace metadef_object metadef_property metadef_resource_type metadef_tag For a complete listing and description of each event refer to: http://docs.openstack.org/developer/glance/notifications.html Related options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api enabled_backends = None dict value Key:Value pair of store identifier and store type. In case of multiple backends should be separated using comma. enabled_import_methods = ['glance-direct', 'web-download'] list value List of enabled Image Import Methods Both glance-direct and web-download are enabled by default. Related options: [DEFAULT]/node_staging_uri executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. 
The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None http_keepalive = True boolean value Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False , the server returns the header "Connection: close". If set to True , the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: True False Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue`subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the `queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . 
All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. 
This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. key_file = None string value Absolute path to a private key file. Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection. Possible values: Absolute path to the private key file Related options: None limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max location_strategy = location_order string value Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image's locations must be accessed to serve the image's data. Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values location_order and store_type . The default value is location_order , which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference . Possible values: location_order store_type Related options: store_type_preference log-config-append = None string value The name of a logging configuration file. 
This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used, then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_header_line = 16384 integer value Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. Note max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: 0 Positive integer Related options: None max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_request_id_length = 64 integer value Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64.
Users can change this to any integer value between 0 and 16384; however, keep in mind that a larger value may flood the logs. Possible values: Integer value between 0 and 16384 Related options: None metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides the location where the temporary data will be stored. This option is for Glance internal use only. Glance will save the image data uploaded by the user to the staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use the same path as [task]/work_dir Note file://<absolute-directory-path> is the only option the api_image_import flow will support for now. Note The staging path must be on a shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir owner_is_tenant = True boolean value Set the image owner to tenant or the authenticated user. Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. Setting it to False makes the image private to the associated user and sharing with other users within the same tenant (or "project") requires explicit image sharing via image membership. Possible values: True False Related options: None property_protection_file = None string value The location of the property protection file. Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won't be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html Possible values: Empty string Valid path to the property protection configuration file Related options: property_protection_rule_format property_protection_rule_format = roles string value Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies . The default value is roles . If the value is roles , the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies , a policy defined in policy.json is used to express property protections for each of the CRUD operations.
Examples of how property protections are enforced based on roles or policies can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html#examples Possible values: roles policies Related options: property_protection_file public_endpoint = None string value Public url endpoint to use for Glance versions response. This is the public url endpoint that will appear in the Glance "versions" response. If no value is specified, the endpoint that is displayed in the version's response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer's URL for this value. Possible values: None Proxy URL Load balancer URL Related options: None publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. registry_client_ca_file = None string value Absolute path to the Certificate Authority file. Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True . Possible values: String value representing a valid absolute path to the CA file. Related options: registry_client_protocol registry_client_insecure registry_client_cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: String value representing a valid absolute path to the certificate file. Related options: registry_client_protocol registry_client_insecure = False boolean value Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and the SSL connections are validated. 
If set to True , the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: True False Related options: registry_client_protocol registry_client_ca_file registry_client_key_file = None string value Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: String value representing a valid absolute path to the key file. Related options: registry_client_protocol registry_client_protocol = http string value Protocol to use for communication with the registry server. Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. This option can be set to https to establish a secure connection to the registry server. In this case, provide a key to use for the SSL connection using the registry_client_key_file option. Also include the CA file and cert file using the options registry_client_ca_file and registry_client_cert_file respectively. Possible values: http https Related options: registry_client_key_file registry_client_cert_file registry_client_ca_file registry_client_timeout = 600 integer value Timeout value for registry requests. Provide an integer value representing the period of time in seconds that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never timeout. Possible values: Zero Positive integer Related options: None registry_host = 0.0.0.0 host address value Address the registry server is hosted on. Possible values: A valid IP or hostname Related options: None registry_port = 9191 port value Port the registry server is listening on. Possible values: A valid port number Related options: None rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. 
The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete secure_proxy_ssl_header = None string value The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is "HTTP_X_FORWARDED_PROTO". send_identity_headers = False boolean value Send headers received from identity when making requests to registry. Typically, Glance registry can be deployed in multiple flavors, which may or may not include authentication. For example, trusted-auth is a flavor that does not require the registry service to authenticate the requests it receives. However, the registry service may still need a user context to be populated to serve the requests. This can be achieved by the caller (the Glance API usually) passing through the headers it received from authenticating with identity for the same request. The typical headers sent are X-User-Id , X-Tenant-Id , X-Roles , X-Identity-Status and X-Service-Catalog . Provide a boolean value to determine whether to send the identity headers to provide tenant and user information along with the requests to registry service. By default, this option is set to False , which means that user and tenant information is not available readily. It must be obtained by authenticating. Hence, if this is set to False , flavor must be set to value that either includes authentication or authenticated user context. Possible values: True False Related options: flavor show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . 
Possible values: True False Related options: show_image_direct_url location_strategy syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: Positive integer value representing time in seconds Related options: None transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. use_user_token = True boolean value Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during big files upload, it is recommended to set this parameter to False.If "use_user_token" is not in effect, then admin credentials can be specified. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. workers = None integer value Number of Glance worker processes to start. 
Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. Note Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: 0 Positive integer value (typically equal to the number of CPUs) Related options: None 3.1.2. cinder The following table outlines the options available under the [cinder] group in the /etc/glance/glance-api.conf file. Table 3.1. cinder Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. 
When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. 
Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the commands required by the cinder store and the os-brick library. Possible values: Path to the rootwrap config file Related options: None 3.1.3. cors The following table outlines the options available under the [cors] group in the /etc/glance/glance-api.conf file. Table 3.2. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials. allow_headers = ['Content-MD5', 'X-Image-Meta-Checksum', 'X-Storage-Token', 'Accept-Encoding', 'X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Image-Meta-Checksum', 'X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 3.1.4. database The following table outlines the options available under the [database] group in the /etc/glance/glance-api.conf file. Table 3.3. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
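To tie the retry options above together, a minimal sketch of the [database] group follows; the connection URL, host name, and password are placeholder assumptions, and the retry values simply restate the documented defaults:
[database]
# placeholder connection string; host and credentials are assumptions
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
db_max_retries = 20
db_max_retry_interval = 10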
db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.1.5. file The following table outlines the options available under the [file] group in the /etc/glance/glance-api.conf file. Table 3.4. file Configuration option = Default value Type Description filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. 
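For example, two data directories could be declared with different priorities under this table's [file] group; the paths and priorities below are assumptions, and /mnt/glance-fast is preferred because of its higher priority:
[file]
filesystem_store_datadirs = /mnt/glance-fast:200
filesystem_store_datadirs = /mnt/glance-slow:100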
More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access could be made members of the group that owns the files created. Assigning a value less than or equal to zero for this configuration option signifies that no changes are made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None 3.1.6. glance.store.http.store The following table outlines the options available under the [glance.store.http.store] group in the /etc/glance/glance-api.conf file. Table 3.5. glance.store.http.store Configuration option = Default value Type Description http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified.
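A brief sketch of enforcing certificate verification with a custom CA bundle; the bundle path is an assumption:
[glance.store.http.store]
https_insecure = False
# assumed location of the CA bundle on the glance-api node
https_ca_certificates_file = /etc/pki/tls/certs/ca-bundle.crt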
If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file 3.1.7. glance.store.rbd.store The following table outlines the options available under the [glance.store.rbd.store] group in the /etc/glance/glance-api.conf file. Table 3.6. glance.store.rbd.store Configuration option = Default value Type Description rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. 
section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf 3.1.8. glance.store.sheepdog.store The following table outlines the options available under the [glance.store.sheepdog.store] group in the /etc/glance/glance-api.conf file. Table 3.7. glance.store.sheepdog.store Configuration option = Default value Type Description sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address 3.1.9. glance.store.swift.store The following table outlines the options available under the [glance.store.swift.store] group in the /etc/glance/glance-api.conf file. Table 3.8. glance.store.swift.store Configuration option = Default value Type Description default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. 
Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. 
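Restated as configuration, the example above would look like the following (values are taken from the description for illustration, not as a recommendation):
[glance.store.swift.store]
swift_store_container = glance
swift_store_multiple_containers_seed = 3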
Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. 
For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. 
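A minimal sketch, assuming a region named RegionTwo exists in the deployment's service catalog:
[glance.store.swift.store]
swift_store_region = RegionTwo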
When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. 
* This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size 3.1.10. glance.store.vmware_datastore.store The following table outlines the options available under the [glance.store.vmware_datastore.store] group in the /etc/glance/glance-api.conf file. Table 3.9. glance.store.vmware_datastore.store Configuration option = Default value Type Description vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). 
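An illustrative fragment for the VMware backend; the host name, credentials, and datastore path below are all assumptions that must match the actual vCenter environment:
[glance.store.vmware_datastore.store]
vmware_server_host = vcenter.example.com
vmware_server_username = glance_svc
vmware_server_password = VMWARE_SVC_PASSWORD
# datacenter path, datastore name and optional weight (assumed values)
vmware_datastores = dc1:datastore1:100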
Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.11. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-api.conf file. Table 3.10. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. 
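For example, to also match on a service name in the catalog (the service name cinderv2 is an assumption specific to the deployment):
[glance_store]
cinder_catalog_info = volumev2:cinderv2:publicURL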
Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. 
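A sketch of dedicated image-service credentials; every value below is a placeholder assumption:
[glance_store]
cinder_store_auth_address = http://openstack.example.org/identity/v2.0
cinder_store_user_name = glance
cinder_store_password = CINDER_STORE_PASS
cinder_store_project_name = service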
Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_backend = None string value The store identifier for the default backend in which data will be stored. The value must be defined as one of the keys in the dict defined by the enabled_backends configuration option in the DEFAULT configuration group. If a value is not defined for this option: the consuming service may refuse to start store_add calls that do not specify a specific backend will raise a glance_store.exceptions.UnknownScheme exception Related Options: enabled_backends default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. 
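As a minimal sketch of a file-backed deployment, restating the documented defaults rather than prescribing new values:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images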
Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. 
This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. 
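An illustrative Ceph-backed configuration; the RADOS user glance and the ceph.conf path are assumptions that must match the actual Ceph deployment:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
# 8 MB chunks, the documented default; should be a power of two
rbd_store_chunk_size = 8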
Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. 
The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. 
Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. 
Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. 
Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. 
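To show how several of the Swift options in this table fit together, the following is a minimal, hypothetical glance-api.conf fragment for a single-tenant Swift backend that keeps credentials out of the database via swift_store_config_file. It assumes these options live in the [glance_store] group, and the reference name ref1, the file paths, and the credentials are placeholder values, not defaults:

[glance_store]
stores = file,http,swift
default_store = swift
swift_store_container = glance
swift_store_create_container_on_put = True
swift_store_config_file = /etc/glance/glance-swift-store.conf
# default_swift_reference selects a section in the account file below;
# it is not part of the table excerpt above.
default_swift_reference = ref1

The referenced account file would then contain one section per Swift account, for example (layout assumed):

[ref1]
user = services:glance
key = GLANCE_SWIFT_PASSWORD
auth_version = 3
auth_address = http://controller:5000/v3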
swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option. Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend.
The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.12. image_format The following table outlines the options available under the [image_format] group in the /etc/glance/glance-api.conf file. Table 3.11. image_format Configuration option = Default value Type Description container_formats = ['ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker', 'compressed'] list value Supported values for the container_format image attribute disk_formats = ['ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'] list value Supported values for the disk_format image attribute 3.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/glance/glance-api.conf file. Table 3.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. 
When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. 
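A typical, hypothetical [keystone_authtoken] block built from the options in this table might look like the following. Note that auth_url, username, password, and the project/domain settings are supplied by the password auth plugin selected with auth_type rather than by the table above, and every value shown is a placeholder:

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
memcached_servers = controller:11211
region_name = RegionOne
auth_type = password
# The options below come from the chosen auth plugin, not from this table.
auth_url = http://controller:5000
project_domain_name = Default
user_domain_name = Default
project_name = services
username = glance
password = GLANCE_PASSWORD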
service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 3.1.14. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-api.conf file. Table 3.13. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.1.15. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/glance/glance-api.conf file. Table 3.14. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. 
must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. 
If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 3.1.16. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/glance/glance-api.conf file. Table 3.15. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 3.1.17. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/glance/glance-api.conf file. Table 3.16. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. 
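As a brief illustration of the notification options above, a hedged [oslo_messaging_notifications] fragment that emits versioned notifications over RabbitMQ could look like this; the broker URL is an example only, and the transport_url option itself is described next:

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
transport_url = rabbit://glance:RABBIT_PASSWORD@controller:5672/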
transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 3.1.18. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/glance/glance-api.conf file. Table 3.17. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True integer value Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled).
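The SSL-related [oslo_messaging_rabbit] options listed above and immediately below are normally set together. A minimal sketch, assuming locally provisioned certificates (all paths are placeholders), might be:

[oslo_messaging_rabbit]
ssl = True
ssl_ca_file = /etc/pki/tls/certs/rabbit-ca.pem
ssl_cert_file = /etc/pki/tls/certs/glance-client.pem
ssl_key_file = /etc/pki/tls/private/glance-client.key
heartbeat_timeout_threshold = 60
heartbeat_rate = 2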
`ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 3.1.19. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/glance/glance-api.conf file. Table 3.18. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 3.1.20. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-api.conf file. Table 3.19. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check 3.1.21. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/glance/glance-api.conf file. Table 3.20. paste_deploy Configuration option = Default value Type Description config_file = None string value Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories.
(For example, if this option is missing from or has no value set in glance-api.conf , the service will look for a file named glance-api-paste.ini .) If the paste configuration file is not found, the service will not start. Possible values: A string value representing the name of the paste configuration file. Related Options: flavor flavor = None string value Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone . Possible values: String value representing a partial pipeline name. Related Options: config_file 3.1.22. profiler The following table outlines the options available under the [profiler] group in the /etc/glance/glance-api.conf file. Table 3.21. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources.
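Putting the profiler options together, a minimal, hypothetical configuration that enables tracing and sends spans to a local Redis instance could look like the following; the hmac_keys value is a placeholder and must match the key configured in the other OpenStack services you want to trace across, and trace_sqlalchemy is described next:

[profiler]
enabled = True
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379
trace_sqlalchemy = True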
sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 3.1.23. store_type_location_strategy The following table outlines the options available under the [store_type_location_strategy] group in the /etc/glance/glance-api.conf file. Table 3.22. store_type_location_strategy Configuration option = Default value Type Description store_type_preference = [] list value Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option. Note The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order. Possible values: Empty list Comma separated list of registered store names. Legal values are: file http rbd swift sheepdog cinder vmware Related options: location_strategy stores 3.1.24. task The following table outlines the options available under the [task] group in the /etc/glance/glance-api.conf file. Table 3.23. task Configuration option = Default value Type Description task_executor = taskflow string value Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, TaskFlow executor is used. TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: taskflow Related Options: None task_time_to_live = 48 integer value Time in hours for which a task lives after either succeeding or failing work_dir = None string value Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. Note When providing a value for work_dir , please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g. 500MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong.
Possible values: String value representing the absolute path to the working directory Related Options: None 3.1.25. taskflow_executor The following table outlines the options available under the [taskflow_executor] group in the /etc/glance/glance-api.conf file. Table 3.24. taskflow_executor Configuration option = Default value Type Description conversion_format = None string value Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw , qcow2 and vmdk . The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator that expands dynamically and supports Copy on Write. The vmdk is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: qcow2 raw vmdk Related options: disk_formats engine_mode = parallel string value Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: serial and parallel . When set to serial , the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: serial parallel Related options: max_workers max_workers = 10 integer value Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: Integer value greater than or equal to 1 Related options: engine_mode 3.2. glance-registry.conf This section contains options for the /etc/glance/glance-registry.conf file. 3.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-registry.conf file. . Configuration option = Default value Type Description admin_role = admin string value Role used to identify an authenticated user as administrator. Provide a string value representing a Keystone role to identify an administrative user. Users with this role will be granted administrative privileges. The default value for this option is admin . Possible values: A string value which is a valid Keystone role Related options: None allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. 
The number of additional properties that can be added to an image can be controlled via the image_property_quota configuration option. Possible values: True False Related options: image_property_quota allow_anonymous_access = False boolean value Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: True False Related options: None api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned is governed either by the limit parameter in the request or the limit_param_default configuration option. The value, in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default backlog = 4096 integer value Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: Positive integer Related options: None bind_host = 0.0.0.0 host address value IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is 0.0.0.0 . Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: A valid IPv4 address A valid IPv6 address Related options: None bind_port = None port value Port number on which the server will listen. Provide a valid port number to bind the server's socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: A valid port number (0 to 65535) Related options: None ca_file = None string value Absolute path to the CA file. Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication. A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet. Possible values: Valid absolute path to the CA file Related options: None cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely. A certificate file typically is a public key container and includes the server's public key, server name, server information and the signature which was a result of the verification process using the CA certificate.
This is required for a secure connection establishment. Possible values: Valid absolute path to the certificate file Related options: None client_socket_timeout = 900 integer value Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. Possible values: Zero Positive integer Related options: None conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. 
When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api enabled_import_methods = ['glance-direct', 'web-download'] list value List of enabled Image Import Methods Both glance-direct and web-download are enabled by default. Related options: [DEFAULT]/node_staging_uri executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None http_keepalive = True boolean value Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False , the server returns the header "Connection: close". If set to True , the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: True False Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. 
This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. key_file = None string value Absolute path to a private key file. Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection. Possible values: Absolute path to the private key file Related options: None limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. 
This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_header_line = 16384 integer value Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. Note max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, keep in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: 0 Positive integer Related options: None max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_request_id_length = 64 integer value Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any integer value between 0 and 16384, keeping in mind that a larger value may flood the logs. Possible values: Integer value between 0 and 16384 Related options: None metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides the location where the temporary data will be stored. This option is for Glance internal use only.
Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir owner_is_tenant = True boolean value Set the image owner to tenant or the authenticated user. Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. Setting it to False makes the image private to the associated user and sharing with other users within the same tenant (or "project") requires explicit image sharing via image membership. Possible values: True False Related options: None publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. secure_proxy_ssl_header = None string value The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is "HTTP_X_FORWARDED_PROTO". show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . 
Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: Positive integer value representing time in seconds Related options: None transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. 
Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. workers = None integer value Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. Note Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: 0 Positive integer value (typically equal to the number of CPUs) Related options: None 3.2.2. database The following table outlines the options available under the [database] group in the /etc/glance/glance-registry.conf file. Table 3.25. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. 
To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.2.3. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/glance/glance-registry.conf file. Table 3.26. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. 
Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 3.2.4. 
oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/glance/glance-registry.conf file. Table 3.27. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. 
Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 3.2.5. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/glance/glance-registry.conf file. Table 3.28. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption. enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 3.2.6. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/glance/glance-registry.conf file. Table 3.29. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 3.2.7. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/glance/glance-registry.conf file. Table 3.30. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True integer value Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native Python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
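As a minimal illustration of how the heartbeat and failover options above fit together, a glance-registry.conf might contain something like the following; the two-node transport_url is a hypothetical example, and the remaining values simply restate the documented defaults:
[DEFAULT]
# Hypothetical RabbitMQ cluster with two nodes; kombu_failover_strategy
# decides which node is tried next when the current connection is lost.
transport_url = rabbit://user:pass@rabbit1:5672,user:pass@rabbit2:5672/
[oslo_messaging_rabbit]
kombu_failover_strategy = round-robin
# Declare the broker down if keep-alive fails for 60 seconds, checking
# the heartbeat twice per timeout window (heartbeat_rate).
heartbeat_timeout_threshold = 60
heartbeat_rate = 2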
kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 3.2.8. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-registry.conf file. Table 3.31. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. 
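For reference, a minimal sketch of the [oslo_policy] section using the options described above could look as follows; the values shown simply restate the documented defaults rather than a recommended production configuration:
[oslo_policy]
# Role-to-permission mappings are read from policy.json, plus any
# override files found under the policy.d directories.
policy_file = policy.json
policy_dirs = policy.d
# Fall back to the "default" rule when a requested rule is not defined.
policy_default_rule = default
# Log (rather than reject) requests whose token scope does not match
# the policy's scope_types.
enforce_scope = False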
remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check 3.2.9. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/glance/glance-registry.conf file. Table 3.32. paste_deploy Configuration option = Default value Type Description config_file = None string value Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in glance-api.conf , the service will look for a file named glance-api-paste.ini .) If the paste configuration file is not found, the service will not start. Possible values: A string value representing the name of the paste configuration file. Related Options: flavor flavor = None string value Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone . Possible values: String value representing a partial pipeline name. Related Options: config_file 3.2.10. profiler The following table outlines the options available under the [profiler] group in the /etc/glance/glance-registry.conf file. Table 3.33. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disables the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty.
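As a hedged sketch only, enabling profiling for this service could look like the [profiler] section below; the Redis endpoint comes from the examples above, the key value is a placeholder, and the same hmac_keys string must be configured in every other OpenStack service that should appear in the trace:
[profiler]
enabled = True
# Placeholder key; replace with a random secret shared across services.
hmac_keys = MY_SECRET_KEY
# Send spans to a local Redis instance instead of the default
# oslo_messaging notifier.
connection_string = redis://127.0.0.1:6379
# Also record individual SQL queries in the trace.
trace_sqlalchemy = True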
es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error/exception to a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 3.3. glance-scrubber.conf This section contains options for the /etc/glance/glance-scrubber.conf file. 3.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-scrubber.conf file. Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via the image_property_quota configuration option. Possible values: True False Related options: image_property_quota api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results.
The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default daemon = False boolean value Run scrubber as a daemon. This boolean configuration option indicates whether scrubber should run as a long-running process that wakes up at regular intervals to scrub images. The wake up interval can be specified using the configuration option wakeup_time . If this configuration option is set to False , which is the default value, scrubber runs once to scrub images and exits. In this case, if the operator wishes to implement continuous scrubbing of images, scrubber needs to be scheduled as a cron job. Possible values: True False Related options: wakeup_time data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). 
When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api enabled_import_methods = ['glance-direct', 'web-download'] list value List of enabled Image Import Methods Both glance-direct and web-download are enabled by default. Related options: [DEFAULT]/node_staging_uri fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. 
Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. 
Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides the location where the temporary data will be stored. This option is for Glance internal use only. Glance will save the image data uploaded by the user to the staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use the same path as [task]/work_dir Note file://<absolute-directory-path> is the only option the api_image_import flow will support for now. Note The staging path must be on a shared filesystem available to all Glance API nodes.
Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. restore = None string value Restore the image status from pending_delete to active . This option is used by administrator to reset the image's status from pending_delete to active when the image is deleted by mistake and pending delete feature is enabled in Glance. Please make sure the glance-scrubber daemon is stopped before restoring the image to avoid image data inconsistency. Possible values: image's uuid scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . 
When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None wakeup_time = 300 integer value Time interval, in seconds, between scrubber runs in daemon mode. Scrubber can be run either as a cron job or daemon. 
When run as a daemon, this configuration time specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed for each run. Also, this impacts how quickly the backend storage is reclaimed. Possible values: Any non-negative integer Related options: daemon delayed_delete watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 3.3.2. database The following table outlines the options available under the [database] group in the /etc/glance/glance-scrubber.conf file. Table 3.34. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 
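As an illustration of how these [database] options fit together, the following is a minimal, hypothetical [database] stanza for /etc/glance/glance-scrubber.conf. The connection URL, host name, and credentials are placeholders invented for this sketch rather than recommended values; the pool and retry settings simply restate the defaults documented in this table.

[database]
# SQLAlchemy connection string for the Glance database (placeholder host and credentials).
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
# Replace pooled connections older than one hour the next time they are checked out.
connection_recycle_time = 3600
# Keep at most 5 connections in the pool, allowing up to 50 temporary overflow connections.
max_pool_size = 5
max_overflow = 50
# Retry a failed startup connection 10 times, waiting 10 seconds between attempts.
max_retries = 10
retry_interval = 10

Any option not listed in this sketch keeps its documented default. The remaining [database] options continue below.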
use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.3.3. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-scrubber.conf file. Table 3.35. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to look up the cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate the cinder endpoint, instead of looking it up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call fails with any error, cinderclient will retry the call up to the specified number of times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_os_region_name = None string value Region name to look up the cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for the cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name.
Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. 
Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. 
This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access could be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes will be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster, i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used.
If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. 
Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. 
This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. 
The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. 
This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. 
Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True . Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority.
Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.3.4. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-scrubber.conf file. Table 3.36. 
oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.3.5. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-scrubber.conf file. Table 3.37. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to CA cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check 3.4. glance-cache.conf This section contains options for the /etc/glance/glance-cache.conf file. 3.4.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-cache.conf file. Configuration option = Default value Type Description admin_password = None string value The administrator's password. If "use_user_token" is not in effect, then admin credentials can be specified. admin_tenant_name = None string value The tenant name of the administrative user. If "use_user_token" is not in effect, then admin tenant name can be specified. admin_user = None string value The administrator's user name. If "use_user_token" is not in effect, then admin credentials can be specified. allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties.
The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default auth_region = None string value The region for the authentication service. If "use_user_token" is not in effect and using keystone auth, then region name can be specified. auth_strategy = noauth string value The strategy to use for authentication. If "use_user_token" is not in effect, then auth strategy can be specified. auth_url = None string value The URL to the keystone service. If "use_user_token" is not in effect and using keystone auth, then URL of keystone can be specified. data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. 
To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api enabled_import_methods = ['glance-direct', 'web-download'] list value List of enabled Image Import Methods Both glance-direct and web-download are enabled by default. Related options: [DEFAULT]/node_staging_uri fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. 
An image is first downloaded to this directory. When the image download is successful, it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in the queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the next time the cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of the image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of the cache is less than or equal to the size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to the sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache.
The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. 
However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. 
Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. registry_client_ca_file = None string value Absolute path to the Certificate Authority file. Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True . Possible values: String value representing a valid absolute path to the CA file. Related options: registry_client_protocol registry_client_insecure registry_client_cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: String value representing a valid absolute path to the certificate file. Related options: registry_client_protocol registry_client_insecure = False boolean value Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and the SSL connections are validated. 
If set to True , the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: True False Related options: registry_client_protocol registry_client_ca_file registry_client_key_file = None string value Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: String value representing a valid absolute path to the key file. Related options: registry_client_protocol registry_client_protocol = http string value Protocol to use for communication with the registry server. Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. This option can be set to https to establish a secure connection to the registry server. In this case, provide a key to use for the SSL connection using the registry_client_key_file option. Also include the CA file and cert file using the options registry_client_ca_file and registry_client_cert_file respectively. Possible values: http https Related options: registry_client_key_file registry_client_cert_file registry_client_ca_file registry_client_timeout = 600 integer value Timeout value for registry requests. Provide an integer value representing the period of time in seconds that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never timeout. Possible values: Zero Positive integer Related options: None registry_host = 0.0.0.0 host address value Address the registry server is hosted on. Possible values: A valid IP or hostname Related options: None registry_port = 9191 port value Port the registry server is listening on. Possible values: A valid port number Related options: None show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. 
When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. use_user_token = True boolean value Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during big files upload, it is recommended to set this parameter to False.If "use_user_token" is not in effect, then admin credentials can be specified. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 3.4.2. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-cache.conf file. Table 3.38. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. 
If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. 
If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. 
The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. 
For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. 
If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. 
Related Options: None sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. 
Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. 
Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. 
When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. 
Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. 
Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.4.3. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-cache.conf file. Table 3.39. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. 
policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to the CA certificate file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to the client certificate for REST based policy check remote_ssl_client_key_file = None string value Absolute path to the client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check
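A minimal configuration sketch follows; it is not part of the reference table above and simply restates the documented defaults (including the policy.d drop-in directory given as the default for policy_dirs ) to show how the [oslo_policy] options sit together in /etc/glance/glance-cache.conf , so adjust the values to your deployment:
[oslo_policy]
# Do not enforce token scope when evaluating policies (documented default).
enforce_scope = False
# Rule applied when a requested rule is not found.
policy_default_rule = default
# Policy file path, relative to this configuration file.
policy_file = policy.json
# Drop-in directories, searched only if the file named by policy_file exists.
policy_dirs = policy.d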
|
[
"The values must be specified as: <group_name>.<event_name> For example: image.create,task.success,metadef_tag"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/configuration_reference/glance
|
Chapter 3. Event sinks
|
Chapter 3. Event sinks 3.1. Event sinks When you create an event source, you can specify an event sink where events are sent to from the source. An event sink is an addressable or a callable resource that can receive incoming events from other resources. Knative services, channels, and brokers are all examples of event sinks. There is also a specific Apache Kafka sink type available. Addressable objects receive and acknowledge an event delivered over HTTP to an address defined in their status.address.url field. As a special case, the core Kubernetes Service object also fulfills the addressable interface. Callable objects are able to receive an event delivered over HTTP and transform the event, returning 0 or 1 new events in the HTTP response. These returned events may be further processed in the same way that events from an external event source are processed. 3.1.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . Tip You can configure which CRs can be used with the --sink flag for Knative ( kn ) CLI commands by Customizing kn . 3.2. Creating event sinks When you create an event source, you can specify an event sink where events are sent to from the source. An event sink is an addressable or a callable resource that can receive incoming events from other resources. Knative services, channels, and brokers are all examples of event sinks. There is also a specific Apache Kafka sink type available. For information about creating resources that can be used as event sinks, see the following documentation: Serverless applications Creating brokers Creating channels Kafka sink 3.3. Sink for Apache Kafka Apache Kafka sinks are a type of event sink that are available if a cluster administrator has enabled Apache Kafka on your cluster. You can send events directly from an event source to a Kafka topic by using a Kafka sink. 3.3.1. Creating an Apache Kafka sink by using YAML You can create a Kafka sink that sends events to a Kafka topic. By default, a Kafka sink uses the binary content mode, which is more efficient than the structured mode. To create a Kafka sink by using YAML, you must create a YAML file that defines a KafkaSink object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. Install the OpenShift CLI ( oc ). 
Procedure Create a KafkaSink object definition as a YAML file: Kafka sink YAML apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server> To create the Kafka sink, apply the KafkaSink YAML file: USD oc apply -f <filename> Configure an event source so that the sink is specified in its spec: Example of a Kafka sink connected to an API server source apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4 1 The name of the event source. 2 The namespace of the event source. 3 The service account for the event source. 4 The Kafka sink name. 3.3.2. Creating an event sink for Apache Kafka by using the OpenShift Container Platform web console You can create a Kafka sink that sends events to a Kafka topic by using the Developer perspective in the OpenShift Container Platform web console. By default, a Kafka sink uses the binary content mode, which is more efficient than the structured mode. As a developer, you can create an event sink to receive events from a particular source and send them to a Kafka topic. Prerequisites You have installed the OpenShift Serverless Operator, with Knative Serving, Knative Eventing, and Knative broker for Apache Kafka APIs, from the OperatorHub. You have created a Kafka topic in your Kafka environment. Procedure In the Developer perspective, navigate to the +Add view. Click Event Sink in the Eventing catalog . Search for KafkaSink in the catalog items and click it. Click Create Event Sink . In the form view, type the URL of the bootstrap server, which is a combination of host name and port. Type the name of the topic to send event data. Type the name of the event sink. Click Create . Verification In the Developer perspective, navigate to the Topology view. Click the created event sink to view its details in the right panel. 3.3.3. Configuring security for Apache Kafka sinks Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for the Knative broker implementation for Apache Kafka. Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resources (CRs) are installed on your OpenShift Container Platform cluster. Kafka sink is enabled in the KnativeKafka CR. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. You have installed the OpenShift ( oc ) CLI. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . 
Procedure Create the certificate files as a secret in the same namespace as your KafkaSink object: Important Certificates and keys must be in PEM format. For authentication using SASL without encryption: USD oc create secret -n <namespace> generic <secret_name> \ --from-literal=protocol=SASL_PLAINTEXT \ --from-literal=sasl.mechanism=<sasl_mechanism> \ --from-literal=user=<username> \ --from-literal=password=<password> For authentication using SASL and encryption using TLS: USD oc create secret -n <namespace> generic <secret_name> \ --from-literal=protocol=SASL_SSL \ --from-literal=sasl.mechanism=<sasl_mechanism> \ --from-file=ca.crt=<my_caroot.pem_file_path> \ 1 --from-literal=user=<username> \ --from-literal=password=<password> 1 The ca.crt can be omitted to use the system's root CA set if you are using a public cloud managed Kafka service. For authentication and encryption using TLS: USD oc create secret -n <namespace> generic <secret_name> \ --from-literal=protocol=SSL \ --from-file=ca.crt=<my_caroot.pem_file_path> \ 1 --from-file=user.crt=<my_cert.pem_file_path> \ --from-file=user.key=<my_key.pem_file_path> 1 The ca.crt can be omitted to use the system's root CA set if you are using a public cloud managed Kafka service. Create or modify a KafkaSink object and add a reference to your secret in the auth spec: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink_name> namespace: <namespace> spec: ... auth: secret: ref: name: <secret_name> ... Apply the KafkaSink object: USD oc apply -f <filename>
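As a small, hedged sketch of the sink prefixes described in the "Knative CLI sink flag" section, the sink binding example can target a broker instead of a service; the binding name bind-heartbeat-broker and the broker name default are assumptions for illustration only and are not defined elsewhere in this chapter: USD kn source binding create bind-heartbeat-broker --namespace sinkbinding-example --subject "Job:batch/v1:app=heartbeat-cron" --sink broker:default Here broker:default resolves to a broker named default in the given namespace, in the same way that the svc prefix resolves to a Knative service.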
|
[
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server>",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SASL_PLAINTEXT --from-literal=sasl.mechanism=<sasl_mechanism> --from-literal=user=<username> --from-literal=password=<password>",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=<my_caroot.pem_file_path> \\ 1 --from-literal=user=<username> --from-literal=password=<password>",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=<my_caroot.pem_file_path> \\ 1 --from-file=user.crt=<my_cert.pem_file_path> --from-file=user.key=<my_key.pem_file_path>",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink_name> namespace: <namespace> spec: auth: secret: ref: name: <secret_name>",
"oc apply -f <filename>"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/event-sinks
|
Release notes for Red Hat build of OpenJDK 11.0.23
|
Release notes for Red Hat build of OpenJDK 11.0.23 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/index
|
Chapter 2. Installing automation content navigator on RHEL
|
Chapter 2. Installing automation content navigator on RHEL As a content creator, you can install automation content navigator on Red Hat Enterprise Linux (RHEL) 8.6 or later. 2.1. Installing automation content navigator on RHEL from an RPM You can install automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM. Prerequisites You have installed Python 3.10 or later. You have installed RHEL 8.6 or later. You registered your system with Red Hat Subscription Manager. Note Ensure that you install only the navigator matching your current Red Hat Ansible Automation Platform environment. Procedure Attach the Red Hat Ansible Automation Platform SKU: USD subscription-manager attach --pool=<sku-pool-id> Install automation content navigator with the following command: v.2.4 for RHEL 8 for x86_64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator v.2.4 for RHEL 9 for x86_64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator Verification Verify your automation content navigator installation: USD ansible-navigator --help A successful installation displays the automation content navigator usage and help text.
|
[
"subscription-manager attach --pool=<sku-pool-id>",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator",
"ansible-navigator --help"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/assembly-installing_on_rhel_ansible-navigator
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.352/making-open-source-more-inclusive
|
Chapter 12. Troubleshooting
|
Chapter 12. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 12.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.10 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 12.2. MTC custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 12.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 12.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 12.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 12.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 12.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 12.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 12.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 12.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a MigMigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster and to return the names of pods that are not in a Running state. 12.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration.
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 12.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 12.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 12.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu next to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 12.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 12.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 12.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 12.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 12.3.4.1.1.
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 12.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 12.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 12.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 12.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 12.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 12.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . 
To collect data for the past 24 hours: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 \ -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . Viewing metrics data with the Prometheus console You can view the metrics data with the Prometheus console. Procedure Decompress the prom_data.tar.gz file: USD tar -xvzf must-gather/metrics/prom_data.tar.gz Create a local Prometheus instance: USD make prometheus-run The command outputs the Prometheus URL. Output Started Prometheus on http://localhost:9090 Launch a web browser and navigate to the URL to view the data by using the Prometheus web console. After you have viewed the data, delete the Prometheus instance and data: USD make prometheus-cleanup 12.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 12.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 12.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 12.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 12.4.1. Updating deprecated internal images If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster. If an OpenShift Container Platform 3 image is deprecated in OpenShift Container Platform 4.10, you can manually update the image stream tag by using podman . 
Prerequisites You must have podman installed. You must be logged in as a user with cluster-admin privileges. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. The internal registries must be exposed on the source and target clusters. Procedure Ensure that the internal registries are exposed on the OpenShift Container Platform 3 and 4 clusters. The OpenShift image registry is exposed by default on OpenShift Container Platform 4. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. Log in to the OpenShift Container Platform 3 registry: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Log in to the OpenShift Container Platform 4 registry: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Pull the OpenShift Container Platform 3 image: USD podman pull <registry_url>:<port>/openshift/<image> Tag the OpenShift Container Platform 3 image for the OpenShift Container Platform 4 registry: USD podman tag <registry_url>:<port>/openshift/<image> \ 1 <registry_url>:<port>/openshift/<image> 2 1 Specify the registry URL and port for the OpenShift Container Platform 3 cluster. 2 Specify the registry URL and port for the OpenShift Container Platform 4 cluster. Push the image to the OpenShift Container Platform 4 registry: USD podman push <registry_url>:<port>/openshift/<image> 1 1 Specify the OpenShift Container Platform 4 cluster. Verify that the image has a valid image stream: USD oc get imagestream -n openshift | grep <image> Example output NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago 12.4.2. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 12.4.3. 
Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 12.4.3.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 12.4.3.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 12.4.3.3. Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 12.4.3.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 12.4.3.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. 
Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 12.4.3.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 12.4.3.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 12.4.4. 
Known issues This release has the following known issues: During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. ( BZ#1748440 ) Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h ) in the MigrationController custom resource (CR) manifest. If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower. If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody . The migration fails and a permission error is displayed in the Restic pod log. ( BZ#1873641 ) You can resolve this issue by adding supplemental groups for Restic to the MigrationController CR manifest: spec: ... restic_supplemental_groups: - 5555 - 6666 If you perform direct volume migration with nodes that are in different availability zones or availability sets, the migration might fail because the migrated pods cannot access the PVC. ( BZ#1947487 ) 12.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 12.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. 
Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 12.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 12.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
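Referring back to the manual rollback steps above, you can also read the premigration replica count directly with a jsonpath query instead of inspecting the full Deployment manifest. This is a sketch; substitute your own deployment name and namespace:
USD oc get deployment <deployment> -n <namespace> -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}'
The value returned by this query is the number to pass to the oc scale command shown earlier.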
|
[
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"tar -xvzf must-gather/metrics/prom_data.tar.gz",
"make prometheus-run",
"Started Prometheus on http://localhost:9090",
"make prometheus-cleanup",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman pull <registry_url>:<port>/openshift/<image>",
"podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2",
"podman push <registry_url>:<port>/openshift/<image> 1",
"oc get imagestream -n openshift | grep <image>",
"NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"spec: restic_supplemental_groups: - 5555 - 6666",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migrating_from_version_3_to_4/troubleshooting-3-4
|
Chapter 6. Getting Started with nftables
|
Chapter 6. Getting Started with nftables The nftables framework provides packet classification facilities and it is the designated successor to the iptables , ip6tables , arptables , ebtables , and ipset tools. It offers numerous improvements in convenience, features, and performance over previous packet-filtering tools, most notably: built-in lookup tables instead of linear processing a single framework for both the IPv4 and IPv6 protocols rules all applied atomically instead of fetching, updating, and storing a complete rule set support for debugging and tracing in the rule set ( nftrace ) and monitoring trace events (in the nft tool) more consistent and compact syntax, no protocol-specific extensions a Netlink API for third-party applications Similarly to iptables , nftables uses tables for storing chains. The chains contain individual rules for performing actions. The nft tool replaces all tools from the previous packet-filtering frameworks. The libnftnl library can be used for low-level interaction with the nftables Netlink API over the libmnl library. To display the effect of rule set changes, use the nft list ruleset command. Since these tools add tables, chains, rules, sets, and other objects to the nftables rule set, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands. When to use firewalld or nftables firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance-critical firewalls, such as for a whole network. Important To prevent the different firewall services from influencing each other, run only one of them on a RHEL host, and disable the other services. 6.1. Writing and executing nftables scripts The nftables framework provides a native scripting environment that brings a major benefit over using shell scripts to maintain firewall rules: the execution of scripts is atomic. This means that the system either applies the whole script or prevents the execution if an error occurs. This guarantees that the firewall is always in a consistent state. Additionally, the nftables script environment enables administrators to: add comments define variables include other rule set files This section explains how to use these features, as well as how to create and execute nftables scripts. When you install the nftables package, Red Hat Enterprise Linux automatically creates *.nft scripts in the /etc/nftables/ directory. These scripts contain commands that create tables and empty chains for different purposes. 6.1.1. Supported nftables script formats The nftables scripting environment supports scripts in the following formats: You can write a script in the same format as the nft list ruleset command displays the rule set: You can use the same syntax for commands as in nft commands: 6.1.2. Running nftables scripts You can run an nftables script either by passing it to the nft utility or by executing the script directly. Prerequisites The procedure of this section assumes that you stored an nftables script in the /etc/nftables/example_firewall.nft file. Procedure 6.1. Running nftables scripts using the nft utility To run an nftables script by passing it to the nft utility, enter: Procedure 6.2. 
Running the nftables script directly: Steps that are required only once: Ensure that the script starts with the following shebang sequence: Important If you omit the -f parameter, the nft utility does not read the script and displays: Error: syntax error, unexpected newline, expecting string. Optional: Set the owner of the script to root : Make the script executable for the owner: Run the script: If no output is displayed, the system executed the script successfully. Important Even if nft executes the script successfully, incorrectly placed rules, missing parameters, or other problems in the script can cause the firewall to behave in unexpected ways. Additional resources For details about setting the owner of a file, see the chown(1) man page. For details about setting permissions of a file, see the chmod(1) man page. For more information about loading nftables rules at system boot, see Section 6.1.6, "Automatically loading nftables rules when the system boots" 6.1.3. Using comments in nftables scripts The nftables scripting environment interprets everything to the right of a # character as a comment. Example 6.1. Comments in an nftables script Comments can start at the beginning of a line, as well as next to a command: 6.1.4. Using variables in an nftables script To define a variable in an nftables script, use the define keyword. You can store single values and anonymous sets in a variable. For more complex scenarios, use named sets or verdict maps. Variables with a single value The following example defines a variable named INET_DEV with the value enp1s0 : You can use the variable in the script by writing the USD sign followed by the variable name: Variables that contain an anonymous set The following example defines a variable that contains an anonymous set: You can use the variable in the script by writing the USD sign followed by the variable name: Note Note that curly braces have special semantics when you use them in a rule because they indicate that the variable represents a set. Additional resources For more information about sets, see Section 6.4, "Using sets in nftables commands" . For more information about verdict maps, see Section 6.5, "Using verdict maps in nftables commands" . 6.1.5. Including files in an nftables script The nftables scripting environment enables administrators to include other scripts by using the include statement. If you specify only a file name without an absolute or relative path, nftables includes files from the default search path, which is set to /etc on Red Hat Enterprise Linux. Example 6.2. Including files from the default search directory To include a file from the default search directory: Example 6.3. Including all *.nft files from a directory To include all files ending in *.nft that are stored in the /etc/nftables/rulesets/ directory: Note that the include statement does not match files beginning with a dot. Additional resources For further details, see the Include files section in the nft(8) man page. 6.1.6. Automatically loading nftables rules when the system boots The nftables systemd service loads firewall scripts that are included in the /etc/sysconfig/nftables.conf file. This section explains how to load firewall rules automatically when the system boots. Prerequisites The nftables scripts are stored in the /etc/nftables/ directory. Procedure 6.3. Automatically loading nftables rules when the system boots Edit the /etc/sysconfig/nftables.conf file. 
If you enhance the *.nft scripts that were created in /etc/nftables/ when you installed the nftables package, uncomment the include statements for those scripts. If you write scripts from scratch, add include statements for them. For example, to load the /etc/nftables/example.nft script when the nftables service starts, add: Optionally, start the nftables service to load the firewall rules without rebooting the system: Enable the nftables service. Additional resources For more information, see Section 6.1.1, "Supported nftables script formats"
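As a consolidated illustration of the features described in this section (the shebang sequence, comments, variables, and an atomically applied rule set), the following sketch shows what a small /etc/nftables/example.nft script could look like. The interface name and addresses are placeholder values reused from the earlier examples:
#!/usr/sbin/nft -f
# Example firewall script with placeholder values
define INET_DEV = enp1s0
define DNS_SERVERS = { 192.0.2.1, 192.0.2.2 }
flush ruleset
table inet example_table {
    chain example_chain {
        # Drop incoming packets unless a rule below accepts them
        type filter hook input priority 0; policy drop;
        # Accept SSH on the defined interface and traffic to the defined addresses
        iifname USDINET_DEV tcp dport ssh accept
        ip daddr USDDNS_SERVERS accept
    }
}
Because the script starts with the flush ruleset command, running it with nft -f replaces the whole rule set atomically, which matches the consistency guarantee described at the beginning of this section.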
|
[
"#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }",
"#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept",
"nft -f /etc/nftables/example_firewall.nft",
"#!/usr/sbin/nft -f",
"chown root /etc/nftables/ example_firewall.nft",
"chmod u+x /etc/nftables/ example_firewall.nft",
"/etc/nftables/ example_firewall.nft",
"Flush the rule set flush ruleset add table inet example_table # Create a table",
"define INET_DEV = enp1s0",
"add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept",
"define DNS_SERVERS = { 192.0.2.1, 192.0.2.2 }",
"add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept",
"include \"example.nft\"",
"include \"/etc/nftables/rulesets/*.nft\"",
"include \"/etc/nftables/example.nft\"",
"systemctl start nftables",
"systemctl enable nftables"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-Getting_started_with_nftables
|
Chapter 2. Project [project.openshift.io/v1]
|
Chapter 2. Project [project.openshift.io/v1] Description Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators. Listing or watching projects will return only projects the user has the reader role on. An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed as editable to end users while namespaces are not. Direct creation of a project is typically restricted to administrators, while end users should use the requestproject resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProjectSpec describes the attributes on a Project status object ProjectStatus is information about the current status of a Project 2.1.1. .spec Description ProjectSpec describes the attributes on a Project Type object Property Type Description finalizers array (string) Finalizers is an opaque list of values that must be empty to permanently remove object from storage 2.1.2. .status Description ProjectStatus is information about the current status of a Project Type object Property Type Description conditions array (NamespaceCondition) Represents the latest available observations of the project current state. phase string Phase is the current lifecycle phase of the project Possible enum values: - "Active" means the namespace is available for use in the system - "Terminating" means the namespace is undergoing graceful termination 2.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projects GET : list or watch objects of kind Project POST : create a Project /apis/project.openshift.io/v1/watch/projects GET : watch individual changes to a list of Project. deprecated: use the 'watch' parameter with a list operation instead. /apis/project.openshift.io/v1/projects/{name} DELETE : delete a Project GET : read the specified Project PATCH : partially update the specified Project PUT : replace the specified Project /apis/project.openshift.io/v1/watch/projects/{name} GET : watch changes to an object of kind Project. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. 
/apis/project.openshift.io/v1/projects Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list or watch objects of kind Project Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK ProjectList schema 401 - Unauthorized Empty HTTP method POST Description create a Project Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Project schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 202 - Accepted Project schema 401 - Unauthorized Empty 2.2.2. /apis/project.openshift.io/v1/watch/projects Table 2.7. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Project. deprecated: use the 'watch' parameter with a list operation instead. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/project.openshift.io/v1/projects/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the Project Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Project Table 2.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Project Table 2.14. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Project Table 2.15. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.16. Body parameters Parameter Type Description body Patch schema Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Project Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. Body parameters Parameter Type Description body Project schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty 2.2.4. /apis/project.openshift.io/v1/watch/projects/{name} Table 2.21. Global path parameters Parameter Type Description name string name of the Project Table 2.22. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Project. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.23. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
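As a quick illustration of these parameters, the following requests watch a single Project over the REST API. This is a sketch rather than a definitive invocation: the API server URL, bearer token, and the project name my-project are hypothetical placeholders. The first request uses the recommended list operation with the watch and fieldSelector parameters; the second uses the deprecated dedicated watch path described above.

# Recommended: watch via the list endpoint, filtered to a single Project (placeholder URL, token, and name).
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/apis/project.openshift.io/v1/projects?watch=true&fieldSelector=metadata.name%3Dmy-project&allowWatchBookmarks=true"

# Deprecated: the dedicated watch path for a single Project.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/apis/project.openshift.io/v1/watch/projects/my-project"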
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/project_apis/project-project-openshift-io-v1
|
Chapter 2. Migrating Data Grid Server deployments
|
Chapter 2. Migrating Data Grid Server deployments Review the details in this section to plan and prepare a successful migration of Data Grid Server. 2.1. Data Grid Server 8 Data Grid Server 8 is: Designed for modern system architectures. Built for containerized platforms. Optimized for native image compilation with Quarkus. The transition to a cloud-native architecture means that Data Grid Server 8 is no longer based on Red Hat JBoss Enterprise Application Platform (EAP). Instead, Data Grid Server 8 is based on the Netty project's client/server framework. This change affects migration from previous versions because many of the facilities that integration with EAP provided are no longer relevant to Data Grid 8 or have changed. For instance, while the complexity of server configuration is greatly reduced in comparison to previous releases, you do need to adapt your existing configuration to a new schema. Data Grid 8 also provides more of a convention for server configuration than previous versions, where it was possible to achieve much more granular configuration. Additionally, Data Grid Server no longer leverages Domain Mode to centrally manage configuration. The Data Grid team acknowledges that these configuration changes place additional effort on our customers to migrate their existing clusters to Data Grid 8. We believe that it is better to use container orchestration platforms, such as Red Hat OpenShift, to provision and administer Data Grid clusters along with automation engines, such as Red Hat Ansible, to manage Data Grid configuration. These technologies offer greater flexibility in that they are more generic and suitable for multiple disparate systems, rather than solutions that are more specific to Data Grid. In terms of migration to Data Grid 8, it is worth noting that solutions like Red Hat Ansible are helpful with large-scale configuration deployment. However, that tooling might not necessarily aid the actual migration of your existing Data Grid configuration. 2.2. Data Grid Server configuration Data Grid provides a scalable data layer that lets you intelligently and efficiently utilize available computing resources. To achieve this with Data Grid Server deployments, configuration is separated into two layers: dynamic and static. Dynamic configuration Dynamic configuration is mutable, changing at runtime as you create caches and add and remove nodes to and from the cluster. After you deploy your Data Grid Server cluster, you create caches through the Data Grid CLI, Data Grid Console, or Hot Rod and REST endpoints. Data Grid Server permanently stores those caches as part of the cluster state that is distributed across nodes. Each joining node receives the complete cluster state that Data Grid Server automatically synchronizes across all nodes as changes occur. Static configuration Static configuration is immutable, remaining unchanged at runtime. You define static configuration when setting up underlying mechanisms such as cluster transport, authentication and encryption, shared datasources, and so on. By default Data Grid Server uses USDRHDG_HOME/server/conf/infinispan.xml for static configuration.
The root element of the configuration is infinispan and declares two base schema: <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd" xmlns="urn:infinispan:config:15.0" xmlns:server="urn:infinispan:server:15.0"> The urn:infinispan:config schema validates configuration for core Infinispan capabilities such as the cache container. The urn:infinispan:server schema validates configuration for Data Grid Server. Cache container configuration You use the cache-container element to configure the CacheManager interface that provides mechanisms to manage cache lifecycles: <!-- Creates a cache manager named "default" that exports statistics. --> <cache-container name="default" statistics="true"> <!-- Defines cluster transport properties, including the cluster name. --> <!-- Uses the default TCP stack for inter-cluster communication. --> <transport cluster="USD{infinispan.cluster.name}" stack="USD{infinispan.cluster.stack:tcp}" node-name="USD{infinispan.node.name:}"/> </cache-container> The cache-container element can also hold the following configuration elements: security for the cache manager. metrics for MicroProfile compatible metrics. jmx for JMX monitoring and administration. Important In versions, you could define multiple cache-container elements in your Data Grid configuration to expose cache containers on different endpoints. In Data Grid 8 you must not configure multiple cache containers because the Data Grid CLI and Console can handle only one cache manager per cluster. However you can change the name of the cache container to something more meaningful to your environment than "default", if necessary. You should use separate Data Grid clusters to achieve multitenancy to ensure that cache managers do not interfere with each other. Server configuration You use the server element to configure underlying Data Grid Server mechanisms: <server> <interfaces> <interface name="public"> 1 <inet-address value="USD{infinispan.bind.address:127.0.0.1}"/> 2 </interface> </interfaces> <socket-bindings default-interface="public" 3 port-offset="USD{infinispan.socket.binding.port-offset:0}"> 4 <socket-binding name="default" 5 port="USD{infinispan.bind.port:11222}"/> 6 <socket-binding name="memcached" port="11221"/> 7 </socket-bindings> <security> <security-realms> 8 <security-realm name="default"> 9 <server-identities> 10 <ssl> <keystore path="application.keystore" 11 keystore-password="password" alias="server" key-password="password" generate-self-signed-certificate-host="localhost"/> </ssl> </server-identities> <properties-realm groups-attribute="Roles"> 12 <user-properties path="users.properties" 13 relative-to="infinispan.server.config.path" plain-text="true"/> 14 <group-properties path="groups.properties" 15 relative-to="infinispan.server.config.path" /> </properties-realm> </security-realm> </security-realms> </security> <endpoints socket-binding="default" security-realm="default" /> 16 </server> 1 Creates an interface named "public" that makes the server available on your network. 2 Uses the 127.0.0.1 loopback address for the public interface. 3 Binds the public interface to the network ports where Data Grid Server endpoints listen for incoming client connections. 4 Specifies an offset of 0 for network ports. 5 Creates a socket binding named "default". 6 Specifies port 11222 for the socket binding. 
7 Creates a socket binding for the Memcached connector at port 11221 . 8 Defines security realms that protect endpoints from network intrusion. 9 Creates a security realm named "default". 10 Configures SSL/TLS keystores for identity verification. 11 Specifies the keystore that contains server certificates. 12 Configures the "default" security realm to use properties files to define users and groups that map users to roles. 13 Names the properties file that contains Data Grid users. 14 Specifies that contents of the users.properties file are stored as plain text. 15 Names the properties file that maps Data Grid users to roles. 16 Configures endpoints with Hot Rod and REST connectors. This example shows implicit hotrod-connector and rest-connector elements, which is the default from Data Grid 8.2. Data Grid Server configuration in 8.0 and 8.1 use explicitly declared Hot Rod and REST connectors. Additional resources Data Grid Server Guide Data Grid Server Reference 2.3. Changes to the Data Grid Server 8.2 configuration schema In 7.x versions there was no separate schema for Data Grid Server. This topic lists changes to the Data Grid Server configuration schema between 8.1 and 8.2. Security authorization As of Data Grid 8.2, the server configuration enables authorization by default to restrict user access based on roles and permissions. <cache-container name="default" statistics="true"> <transport cluster="USD{infinispan.cluster.name:cluster}" stack="USD{infinispan.cluster.stack:tcp}" node-name="USD{infinispan.node.name:}"/> <security> <authorization/> 1 </security> </cache-container> 1 Enables authorization for server administration and management and the cache manager lifecycle. You can remove the authorization element to disable security authorization. Client trust stores As of Data Grid 8.2, you can add client trust stores to the server identity configuration and use the truststore-realm element to verify certificates. 8.1 <security-realm name="default"> <server-identities> <ssl> <keystore path="server.pfx" keystore-password="password" alias="server"/> </ssl> </server-identities> <truststore-realm path="trust.pfx" password="secret"/> </security-realm> 8.2 <security-realm name="default"> <server-identities> <ssl> <keystore path="server.pfx" keystore-password="password" alias="server"/> <truststore path="trust.pfx" password="secret"/> 1 </ssl> </server-identities> <truststore-realm/> 2 </security-realm> 1 Specifies a trust store that holds client certificates. 2 If you include the truststore-realm element in the server configuration, the trust store must contain public certificates for all clients. If you do not include the truststore-realm element, the trust store needs only a certificate chain. Endpoint connectors As of Data Grid 8.2, the hotrod-connector and rest-connector elements are implicitly set in the default endpoints configuration. <endpoints socket-binding="default" security-realm="default"/> Modified elements and attributes path , provider , keystore-password , and relative-to attributes are removed from the truststore-realm element. name attribute is added to the truststore-realm element. New elements and attributes credential-stores child element added to the security element. The credential-stores element also contains the credential-store , clear-text-credential , and credential-reference child elements. 
The following is included in the server configuration by default: <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret"/> </credential-store> </credential-stores> ip-filter , accept , and reject child elements added to the endpoints element. security-realm attribute added to the hotrod-connector and rest-connector elements. cache-max-size and cache-lifespan added to the security-realm element to configure the size of the identities cache and lifespan of entries. truststore child element added to the ssl element for specifying trust stores to validate client certificates. Deprecated elements and attributes The following elements and attributes are now deprecated: security-realm attribute on the authentication element. security-realm attribute on the encryption element. Removed elements and attributes No elements or attributes are removed in Data Grid 8.2. 2.4. Changes to the Data Grid Server 8.3 configuration schema This topic lists changes to the Data Grid Server configuration schema between 8.2 and 8.3. Schema changes endpoints element in the urn:infinispan:server namespace is no longer a repeating element but a wrapper for 0 or more endpoint elements. Data Grid Server 8.2 <endpoints socket-binding="default" security-realm="default"> <hotrod-connector name="hotrod"/> <rest-connector name="rest"/> </endpoints> Data Grid Server 8.3 <endpoints> <endpoint socket-binding="public" security-realm="application-realm" admin="false"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding="private" security-realm="management-realm"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints> Modified elements and attributes No elements or attributes are modified in Data Grid 8.3. New elements and attributes endpoint element with the socket-binding and security-realm allow you to define multiple endpoint configurations contained within the endpoints element. security-realm-distributed element to combine multiple security realms. default-realm attribute for the security-realm element specifies a default security realm, which is the first security realm declared unless you specify a different realm. name attribute for the security-realm element to logically separate multiple realms of the same type. network-prefix-override attribute on the hotrod-connector element configures whether to use the netmask that the host system provides for interfaces or override with netmasks that follow IANA private address conventions. policy attribute on the sasl element to list policies that filter the available set of mechanisms. client-ssl-context attribute on the ldap-realm element to specify a realm that provides a trust store to validate clients for SSL/TLS connections. Deprecated elements and attributes The following elements and attributes are now deprecated: name attribute for the regex-principal-transformer element is now ignored. keystore-password attribute on the keystore element for an TLS/SSL server identity is deprecated. Use the password attribute instead. Removed elements and attributes No elements or attributes are removed in Data Grid 8.3. 2.5. Changes to the Data Grid Server 8.4 configuration schema This topic lists changes to the Data Grid Server configuration schema between 8.3 and 8.4. Schema changes No schema changes were made in Data Grid 8.4. 
Modified elements and attributes The following attributes for configuring a data source connection pool have now default values: max-size defaults to 2147483647 , which means that there is no limit on the number of connections in the pool. min-size defaults to 0 , which means the pool can be empty when it starts up. initial-size defaults to 0 , which means that no connections are created initially. The following attributes for configuring a data source connection pool have default value set to 0 , which means that these features are disabled. background-validation validate-on-acquisition leak-detection idle-removal New elements and attributes resp-connector element enables the RESP endpoint for Data Grid. The new maxOccurs attribute of the connection-property element specifies the maximum number of times this element can occur. The default value of maxOccurs is unbounded . masked-credential complexType that adds a masked password for the credential keystore. The masked attribute specifies a masked password in the MASKED_VALUE;SALT;ITERATION format. command-credential executes an external command that supplies the password for the credential keystore. The command attribute specifies an external command. Deprecated elements and attributes No elements and attributes were deprecated in Data Grid 8.4. Removed elements and attributes worker-threads attribute on the protocol-connector element is now removed. security-realm-filesystem element is now removed. 2.6. Changes to the Data Grid Server 8.5 configuration schema This topic lists changes to the Data Grid Server configuration schema between 8.4 and 8.5. Modified elements and attributes ldap-name-rewriter element has been renamed to name-rewriter . New elements and attributes security-evidence-decoder element that lets you specify the evidence decoder to be of type x500-subject-evidence-decoder , or x509-subject-alt-name-evidence-decoder . authentication element in memcached-connector encryption element in memcached-connector protocol attribute in memcached-connector lets you set the Memcached protocol to use. security-realm attribute in memcached-connector lets you define the security realm to use for authentication, cache authorization, and encryption. compression-level attribute value in rest-connector now defaults to 6 . compression-threshold attribute in rest-connector lets you set compression for the response body when the size of the response body exceeds the threshold. The value should be a non-negative number. 0 enables compression for all responses. require-ssl-client-auth attribute in rest-connector now is not optional. evidence-decoder element in security-realm aggregate-realm element in security-realm cache-lifespan attribute in security-realm now defaults to 60000 rather than -1 . groups-attribute attribute in security-realm-properties now has the default value Roles case-principal-transformer element in name-rewriter common-name-principal-transformer element in name-rewriter Deprecated elements and attributes No elements and attributes were deprecated in Data Grid 8.5. Removed elements and attributes No elements and attributes were removed in Data Grid 8.5. 2.7. Data Grid Server endpoint and network configuration This section describes Data Grid Server endpoint and network configuration when migrating from versions. Data Grid 8 simplifies server endpoint configuration by using a single network interface and port to expose endpoints on the network. 2.7.1. Interfaces Interfaces bind expose endpoints to network locations. 
Data Grid Server 7.x network interface configuration In Data Grid 7.x, the server configuration used different interfaces to separate administrative and management access from cache access. <interfaces> <interface name="management"> <inet-address value="USD{jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="USD{jboss.bind.address:127.0.0.1}"/> </interface> </interfaces> Data Grid Server 8 network interface configuration In Data Grid 8, there is one network interface for all client connections for administrative and management access as well as cache access. <interfaces> <interface name="public"> <inet-address value="USD{infinispan.bind.address:127.0.0.1}"/> </interface> </interfaces> 2.7.2. Socket bindings Socket bindings map network interfaces to ports where endpoints listen for client connections. Data Grid Server 7.x socket binding configuration In Data Grid 7.x, the server configuration used unique ports for management and administration, such as 9990 for the Management Console and port 9999 for the native management protocol. Older versions also used unique ports for each endpoint, such as 11222 for external Hot Rod access and 8080 for REST. <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> <socket-binding name="management-http" interface="management" port="USD{jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="USD{jboss.management.https.port:9993}"/> <socket-binding name="hotrod" port="11222"/> <socket-binding name="hotrod-internal" port="11223"/> <socket-binding name="hotrod-multi-tenancy" port="11224"/> <socket-binding name="memcached" port="11211"/> <socket-binding name="rest" port="8080"/> ... </socket-binding-group> Data Grid Server 8 single port configuration Data Grid 8 uses a single port to handle all connections to the server. Hot Rod clients, REST clients, Data Grid CLI, and Data Grid Console all use port 11222 . <socket-bindings default-interface="public" port-offset="USD{infinispan.socket.binding.port-offset:0}"> <socket-binding name="default" port="USD{infinispan.bind.port:11222}"/> <socket-binding name="memcached" port="11221"/> </socket-bindings> 2.7.3. Endpoints Endpoints listen for remote client connections and handle requests over protocols such as Hot Rod and HTTP (REST). Note Data Grid CLI uses the REST endpoint for all cache and administrative operations. Data Grid Server 7.x endpoint subsystem In Data Grid 7.x, the endpoint subsystem let you configure connectors for Hot Rod and REST endpoints. <subsystem xmlns="urn:infinispan:server:endpoint:9.4"> <hotrod-connector socket-binding="hotrod" cache-container="local"> <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/> </hotrod-connector> <rest-connector socket-binding="rest" cache-container="local"> <authentication security-realm="ApplicationRealm" auth-method="BASIC"/> </rest-connector> </subsystem> Data Grid Server 8 endpoint configuration Data Grid 8 replaces the endpoint subsystem with an endpoints element. The hotrod-connector and rest-connector configuration elements and attributes are the same as versions. 
As of Data Grid 8.2, the default endpoints configuration uses implicit Hot Rod and REST connectors as follows: <endpoints socket-binding="default" security-realm="default"/> Data Grid Server 8.0 to 8.2 <endpoints socket-binding="default" security-realm="default"> <hotrod-connector name="hotrod"/> <rest-connector name="rest"/> </endpoints> As of Data Grid Server 8.3 you configure endpoints with security realms and Hot Rod or REST connectors with endpoint elements. The endpoints element is now a wrapper for multiple endpoint configurations. Data Grid Server 8.3 and later <endpoints> <endpoint socket-binding="public" security-realm="application-realm" admin="false"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding="private" security-realm="management-realm"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints> Additional resources Data Grid Server Guide 2.8. Data Grid Server security Data Grid Server security configures authentication and encryption to prevent network attack and safeguard data. 2.8.1. Security realms In Data Grid 8 security realms provide implicit configuration options that mean you do not need to provide as many settings as in versions. For example, if you define a Kerberos realm, you get Kerberos features. If you add a truststore, you get certificate authentication. In Data Grid 7.x, there were two default security realms: ManagementRealm secures the Management API. ApplicationRealm secures endpoints and remote client connections. Data Grid 8, on the other hand, provides a security element that lets you define multiple different security realms that you can use for Hot Rod and REST endpoints: <security> <security-realms> ... </security-realms> </security> Supported security realms Property realms use property files, users.properties and groups.properties , to define users and groups that can access Data Grid. LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information. Trust store realms use keystores that contain the public certificates of all clients that are allowed to access Data Grid. Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection) such as Red Hat SSO. 2.8.2. Server identities Server identities use certificate chains to prove Data Grid Server identities to remote clients. Data Grid 8 uses the same configuration to define SSL identities as in versions with some usability improvements. If a security realm contains an SSL identity, Data Grid automatically enables encryption for endpoints that use that security realm. For test and development environments, Data Grid includes a generate-self-signed-certificate-host attribute that automatically generates a keystore at startup. <security-realm name="default"> <server-identities> <ssl> <keystore path="..." relative-to="..." keystore-password="..." alias="..." key-password="..." generate-self-signed-certificate-host="..."/> </ssl> </server-identities> ... <security-realm> 2.8.3. Endpoint authentication mechanisms Hot Rod and REST endpoints use SASL or HTTP mechanisms to authenticate client connections. Data Grid 8 uses the same authentication element for hotrod-connector and rest-connector configuration as in Data Grid 7.x and earlier. <hotrod-connector name="hotrod"> <authentication> <sasl mechanisms="..." 
server-name="..."/> </authentication> </hotrod-connector> <rest-connector name="rest"> <authentication> <mechanisms="..." server-principal="..."/> </authentication> </rest-connector> One key difference with versions is that Data Grid 8 supports additional authentication mechanisms for endpoints. Hot Rod SASL authentication mechanisms Hot Rod clients now use SCRAM-SHA-512 as the default authentication mechanism instead of DIGEST-MD5 . Note If you use property security realms, you must use the PLAIN authentication mechanism. Authentication mechanism Description Related details PLAIN Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. Similar to the Basic HTTP mechanism. DIGEST-* Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5 , DIGEST-SHA , DIGEST-SHA-256 , DIGEST-SHA-384 , and DIGEST-SHA-512 hashing algorithms, in order of strength. Similar to the Digest HTTP mechanism. SCRAM-* Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA , SCRAM-SHA-256 , SCRAM-SHA-384 , and SCRAM-SHA-512 hashing algorithms, in order of strength. Similar to the Digest HTTP mechanism. GSSAPI Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Similar to the SPNEGO HTTP mechanism. GS2-KRB5 Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Similar to the SPNEGO HTTP mechanism. EXTERNAL Uses client certificates. Similar to the CLIENT_CERT HTTP mechanism. OAUTHBEARER Uses OAuth tokens and requires a token-realm configuration. Similar to the BEARER_TOKEN HTTP mechanism. HTTP (REST) authentication mechanisms Authentication mechanism Description Related details BASIC Uses credentials in plain-text format. You should use BASIC authentication with encrypted connections only. Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism. DIGEST Uses hashing algorithms and nonce values. REST connectors support SHA-512 , SHA-256 and MD5 hashing algorithms. Corresponds to the Digest HTTP authentication scheme and is similar to DIGEST-* SASL mechanisms. SPNEGO Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms. BEARER_TOKEN Uses OAuth tokens and requires a token-realm configuration. Corresponds to the Bearer HTTP authentication scheme and is similar to OAUTHBEARER SASL mechanism. CLIENT_CERT Uses client certificates. Similar to the EXTERNAL SASL mechanism. 2.8.4. Authenticating EAP applications You can now add credentials to hotrod-client.properties on your EAP application classpath to authenticate with Data Grid through: Remote cache containers ( remote-cache-container ) Remote stores ( remote-store ) EAP modules 2.8.5. Logging Data Grid uses Apache Log4j2 instead of the logging subsystem in versions that was based on JBossLogManager. 
By default, Data Grid writes log messages to the following directory: USDRHDG_HOME/USD{infinispan.server.root}/log server.log is the default log file. Access logs In versions Data Grid included a logger to audit security logs for the caches: <authorization audit-logger="org.infinispan.security.impl.DefaultAuditLogger"> Data Grid 8 no longer provides this audit logger. However you can use the logging categories for the Hot Rod and REST endpoints: org.infinispan.HOTROD_ACCESS_LOG org.infinispan.REST_ACCESS_LOG Additional resources Data Grid Server Guide 2.9. Separating Data Grid Server endpoints When migrating from versions, you can create different network locations for Data Grid endpoints to match your existing configuration. However, because Data Grid architecture has changed and now uses a single port for all client connections, not all options in versions are available. Important Administration tools such as the Data Grid CLI and Console use the REST API. You cannot remove the REST API from your endpoint configuration without disabling the Data Grid CLI and Console. Likewise you cannot separate the REST endpoint to use different ports or socket bindings for cache access and administrative access. Procedure Define separate network interfaces for REST and Hot Rod endpoints. For example, define a "public" interface to expose the Hot Rod endpoint externally and a "private" interface to expose the REST endpoint on an network location that has restricted access. <interfaces> <interface name="public"> <inet-address value="USD{infinispan.bind.address:198.51.100.0}"/> </interface> <interface name="private"> <inet-address value="USD{infinispan.bind.address:192.0.2.0}"/> </interface> </interfaces> This configuration creates: A "public" interface with the 198.51.100.0 IP address. A "private" interface with the 192.0.2.0 IP address. Configure separate socket bindings for the endpoints, as in the following example: <socket-bindings default-interface="private" port-offset="USD{infinispan.socket.binding.port-offset:0}"> <socket-binding name="default" port="USD{infinispan.bind.port:8080}"/> <socket-binding name="hotrod" interface="public" port="11222"/> </socket-bindings> This example: Sets the "private" interface as the default for socket bindings. Creates a "default" socket binding that uses port 8080 . Creates a "hotrod" socket binding that uses the "public" interface and port 11222 . Create separate security realms for the endpoints, for example: <security> <security-realms> <security-realm name="truststore"> <server-identities> <ssl> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> </ssl> </server-identities> <truststore-realm path="trust.p12" relative-to="infinispan.server.config.path" keystore-password="secret"/> </security-realm> <security-realm name="kerberos"> <server-identities> <kerberos keytab-path="http.keytab" principal="HTTP/[email protected]" required="true"/> </server-identities> </security-realm> </security-realms> </security> This example: Configures a trust store security realm. Configures a Kerberos security realm. Configure endpoints as follows: <endpoints> <endpoint socket-binding="default" security-realm="kerberos"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding="hotrod" security-realm="truststore"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints> Start Data Grid Server. 
Logs contain the following messages that indicate the network locations where endpoints accept client connections: [org.infinispan.SERVER] ISPN080004: Protocol HotRod listening on 198.51.100.0:11222 [org.infinispan.SERVER] ISPN080004: Protocol SINGLE_PORT listening on 192.0.2.0:8080 [org.infinispan.SERVER] ISPN080034: Server '<hostname>' listening on http://192.0.2.0:8080 Next steps Access Data Grid Console from any browser at http://192.0.2.0:8080 Configure the Data Grid CLI to connect at the custom location, for example: USD bin/cli.sh -c http://192.0.2.0:8080 2.10. Data Grid Server shared datasources Data Grid 7.x JDBC cache stores can use a PooledConnectionFactory to obtain database connections. Data Grid 8 lets you create managed datasources in the server configuration to optimize connection pooling and performance for database connections with JDBC cache stores. Datasource configurations are composed of two sections: connection factory that defines how to connect to the database. connection pool that defines how to pool and reuse connections and is based on Agroal. You first define the datasource connection factory and connection pool in the server configuration and then add it to your JDBC cache store configuration. A configuration sketch is provided at the end of this chapter. For more information on migrating JDBC cache stores, see the Migrating Cache Stores section in this document. Additional resources Data Grid Server Guide 2.11. Data Grid Server JMX and metrics Data Grid 8 exposes metrics via both JMX and a /metrics endpoint for integration with metrics tooling such as Prometheus. The /metrics endpoint provides: Gauges that return values, such as JVM uptime or average number of seconds for cache operations. Histograms that show how long read, write, and remove operations take, in percentiles. In previous versions, Prometheus metrics were collected by an agent that mapped JMX metrics instead of being supported natively. Previous versions of Data Grid also used the JBoss Operations Network (JON) plug-in to obtain metrics and perform operations. Data Grid 8 no longer uses the JON plug-in. Data Grid 8 separates JMX and Prometheus metrics into cache manager and cache level configurations. <cache-container name="default" statistics="true"> 1 <jmx enabled="true" /> 2 </cache-container> 1 Enables statistics for the cache manager. This is the default. 2 Exports JMX MBeans, which includes all statistics and operations. <distributed-cache name="mycache" statistics="true" /> 1 1 Enables statistics for the cache. Additional resources Data Grid Server Guide 2.12. Data Grid Server cheatsheet Use the following commands and examples as a quick reference for working with Data Grid Server. Starting server instances Linux USD bin/server.sh Microsoft Windows USD bin\server.bat Starting the CLI Linux USD bin/cli.sh Microsoft Windows USD bin\cli.bat Creating users Linux USD bin/cli.sh user create myuser -p "qwer1234!" Microsoft Windows USD bin\cli.bat user create myuser -p "qwer1234!" Stopping server instances Single server instances [//containers/default]> shutdown server USDhostname Entire clusters [//containers/default]> shutdown cluster Listing available command options Use the -h flag to list available command options for running servers.
Linux USD bin/server.sh -h Microsoft Windows USD bin\server.bat -h 7.x to 8 reference 7.x 8.x ./standalone.sh -c clustered.xml ./server.sh ./standalone.sh ./server.sh -c infinispan-local.xml -Djboss.default.multicast.address=234.99.54.20 -Djgroups.mcast_addr=234.99.54.20 -Djboss.bind.address=172.18.1.13 -Djgroups.bind.address=172.18.1.13 -Djboss.default.jgroups.stack=udp -j udp Additional resources Data Grid Server Guide
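The shared datasource description in Section 2.10 does not include a listing, so the following is a minimal configuration sketch. It assumes a PostgreSQL driver is available on the server classpath; the connection URL, credentials, JNDI name, pool sizes, and table columns are hypothetical placeholders, and the cache definition is abbreviated. Check the Data Grid Server Guide for the exact schema of your version.

<!-- Managed datasource in the server configuration: a connection factory plus an Agroal connection pool. -->
<server xmlns="urn:infinispan:server:15.0">
  <data-sources>
    <data-source name="postgres" jndi-name="jdbc/postgres" statistics="true">
      <connection-factory driver="org.postgresql.Driver"
                          url="jdbc:postgresql://postgres.example.com:5432/mydb"
                          username="dbuser"
                          password="changeme"/>
      <connection-pool initial-size="1" min-size="1" max-size="10"/>
    </data-source>
  </data-sources>
</server>

<!-- JDBC string-keyed cache store that reuses the managed datasource through its JNDI name. -->
<distributed-cache name="mycache">
  <persistence>
    <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:15.0">
      <data-source jndi-url="jdbc/postgres"/>
      <string-keyed-table prefix="ISPN">
        <id-column name="ID" type="VARCHAR(255)"/>
        <data-column name="DATA" type="BYTEA"/>
        <timestamp-column name="TS" type="BIGINT"/>
        <segment-column name="SEGMENT" type="INT"/>
      </string-keyed-table>
    </string-keyed-jdbc-store>
  </persistence>
</distributed-cache>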
|
[
"<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd\" xmlns=\"urn:infinispan:config:15.0\" xmlns:server=\"urn:infinispan:server:15.0\">",
"<!-- Creates a cache manager named \"default\" that exports statistics. --> <cache-container name=\"default\" statistics=\"true\"> <!-- Defines cluster transport properties, including the cluster name. --> <!-- Uses the default TCP stack for inter-cluster communication. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"USD{infinispan.cluster.stack:tcp}\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container>",
"<server> <interfaces> <interface name=\"public\"> 1 <inet-address value=\"USD{infinispan.bind.address:127.0.0.1}\"/> 2 </interface> </interfaces> <socket-bindings default-interface=\"public\" 3 port-offset=\"USD{infinispan.socket.binding.port-offset:0}\"> 4 <socket-binding name=\"default\" 5 port=\"USD{infinispan.bind.port:11222}\"/> 6 <socket-binding name=\"memcached\" port=\"11221\"/> 7 </socket-bindings> <security> <security-realms> 8 <security-realm name=\"default\"> 9 <server-identities> 10 <ssl> <keystore path=\"application.keystore\" 11 keystore-password=\"password\" alias=\"server\" key-password=\"password\" generate-self-signed-certificate-host=\"localhost\"/> </ssl> </server-identities> <properties-realm groups-attribute=\"Roles\"> 12 <user-properties path=\"users.properties\" 13 relative-to=\"infinispan.server.config.path\" plain-text=\"true\"/> 14 <group-properties path=\"groups.properties\" 15 relative-to=\"infinispan.server.config.path\" /> </properties-realm> </security-realm> </security-realms> </security> <endpoints socket-binding=\"default\" security-realm=\"default\" /> 16 </server>",
"<cache-container name=\"default\" statistics=\"true\"> <transport cluster=\"USD{infinispan.cluster.name:cluster}\" stack=\"USD{infinispan.cluster.stack:tcp}\" node-name=\"USD{infinispan.node.name:}\"/> <security> <authorization/> 1 </security> </cache-container>",
"<security-realm name=\"default\"> <server-identities> <ssl> <keystore path=\"server.pfx\" keystore-password=\"password\" alias=\"server\"/> </ssl> </server-identities> <truststore-realm path=\"trust.pfx\" password=\"secret\"/> </security-realm>",
"<security-realm name=\"default\"> <server-identities> <ssl> <keystore path=\"server.pfx\" keystore-password=\"password\" alias=\"server\"/> <truststore path=\"trust.pfx\" password=\"secret\"/> 1 </ssl> </server-identities> <truststore-realm/> 2 </security-realm>",
"<endpoints socket-binding=\"default\" security-realm=\"default\"/>",
"<credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret\"/> </credential-store> </credential-stores>",
"<endpoints socket-binding=\"default\" security-realm=\"default\"> <hotrod-connector name=\"hotrod\"/> <rest-connector name=\"rest\"/> </endpoints>",
"<endpoints> <endpoint socket-binding=\"public\" security-realm=\"application-realm\" admin=\"false\"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding=\"private\" security-realm=\"management-realm\"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints>",
"<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> </interfaces>",
"<interfaces> <interface name=\"public\"> <inet-address value=\"USD{infinispan.bind.address:127.0.0.1}\"/> </interface> </interfaces>",
"<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"management-http\" interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <socket-binding name=\"management-https\" interface=\"management\" port=\"USD{jboss.management.https.port:9993}\"/> <socket-binding name=\"hotrod\" port=\"11222\"/> <socket-binding name=\"hotrod-internal\" port=\"11223\"/> <socket-binding name=\"hotrod-multi-tenancy\" port=\"11224\"/> <socket-binding name=\"memcached\" port=\"11211\"/> <socket-binding name=\"rest\" port=\"8080\"/> </socket-binding-group>",
"<socket-bindings default-interface=\"public\" port-offset=\"USD{infinispan.socket.binding.port-offset:0}\"> <socket-binding name=\"default\" port=\"USD{infinispan.bind.port:11222}\"/> <socket-binding name=\"memcached\" port=\"11221\"/> </socket-bindings>",
"<subsystem xmlns=\"urn:infinispan:server:endpoint:9.4\"> <hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\"> <topology-state-transfer lazy-retrieval=\"false\" lock-timeout=\"1000\" replication-timeout=\"5000\"/> </hotrod-connector> <rest-connector socket-binding=\"rest\" cache-container=\"local\"> <authentication security-realm=\"ApplicationRealm\" auth-method=\"BASIC\"/> </rest-connector> </subsystem>",
"<endpoints socket-binding=\"default\" security-realm=\"default\"/>",
"<endpoints socket-binding=\"default\" security-realm=\"default\"> <hotrod-connector name=\"hotrod\"/> <rest-connector name=\"rest\"/> </endpoints>",
"<endpoints> <endpoint socket-binding=\"public\" security-realm=\"application-realm\" admin=\"false\"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding=\"private\" security-realm=\"management-realm\"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints>",
"<security> <security-realms> </security-realms> </security>",
"<security-realm name=\"default\"> <server-identities> <ssl> <keystore path=\"...\" relative-to=\"...\" keystore-password=\"...\" alias=\"...\" key-password=\"...\" generate-self-signed-certificate-host=\"...\"/> </ssl> </server-identities> <security-realm>",
"<hotrod-connector name=\"hotrod\"> <authentication> <sasl mechanisms=\"...\" server-name=\"...\"/> </authentication> </hotrod-connector> <rest-connector name=\"rest\"> <authentication> <mechanisms=\"...\" server-principal=\"...\"/> </authentication> </rest-connector>",
"<authorization audit-logger=\"org.infinispan.security.impl.DefaultAuditLogger\">",
"<interfaces> <interface name=\"public\"> <inet-address value=\"USD{infinispan.bind.address:198.51.100.0}\"/> </interface> <interface name=\"private\"> <inet-address value=\"USD{infinispan.bind.address:192.0.2.0}\"/> </interface> </interfaces>",
"<socket-bindings default-interface=\"private\" port-offset=\"USD{infinispan.socket.binding.port-offset:0}\"> <socket-binding name=\"default\" port=\"USD{infinispan.bind.port:8080}\"/> <socket-binding name=\"hotrod\" interface=\"public\" port=\"11222\"/> </socket-bindings>",
"<security> <security-realms> <security-realm name=\"truststore\"> <server-identities> <ssl> <keystore path=\"server.p12\" relative-to=\"infinispan.server.config.path\" keystore-password=\"secret\" alias=\"server\"/> </ssl> </server-identities> <truststore-realm path=\"trust.p12\" relative-to=\"infinispan.server.config.path\" keystore-password=\"secret\"/> </security-realm> <security-realm name=\"kerberos\"> <server-identities> <kerberos keytab-path=\"http.keytab\" principal=\"HTTP/[email protected]\" required=\"true\"/> </server-identities> </security-realm> </security-realms> </security>",
"<endpoints> <endpoint socket-binding=\"default\" security-realm=\"kerberos\"> <hotrod-connector/> <rest-connector/> </endpoint> <endpoint socket-binding=\"hotrod\" security-realm=\"truststore\"> <hotrod-connector/> <rest-connector/> </endpoint> </endpoints>",
"[org.infinispan.SERVER] ISPN080004: Protocol HotRod listening on 198.51.100.0:11222 [org.infinispan.SERVER] ISPN080004: Protocol SINGLE_PORT listening on 192.0.2.0:8080 [org.infinispan.SERVER] ISPN080034: Server '<hostname>' listening on http://192.0.2.0:8080",
"bin/cli.sh -c http://192.0.2.0:8080",
"<cache-container name=\"default\" statistics=\"true\"> 1 <jmx enabled=\"true\" /> 2 </cache-container>",
"<distributed-cache name=\"mycache\" statistics=\"true\" /> 1",
"bin/server.sh",
"bin\\server.bat",
"bin/cli.sh",
"bin\\cli.bat",
"bin/cli.sh user create myuser -p \"qwer1234!\"",
"bin\\cli.bat user create myuser -p \"qwer1234!\"",
"[//containers/default]> shutdown server USDhostname",
"[//containers/default]> shutdown cluster",
"bin/server.sh -h",
"bin\\server.bat -h"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/migrating_to_data_grid_8/server-migration
|
Chapter 14. Creating a role user
|
Chapter 14. Creating a role user As explained in Section 2.6.6.1, "Default administrative roles" , a bootstrap user was created during the installation. After the installation, create real users and assign them proper system privileges. For compliance, each user must be a member of only one role (group). This chapter instructs you how to: Create a Certificate System administrative user on the operating system Create a PKI role in Certificate System 14.1. Creating a PKI administrative user on the operating system This section is for administrative role users. Agent and Auditor role users, see Section 14.2, "Creating a PKI role user in Certificate System" . In general, administrators, agents, and auditors in Certificate System can manage the Certificate System instance remotely using client applications, such as command-line utilities, the Java Console, and browsers. For the majority of CS management tasks, a Certificate System role user does not need to log on to the host machine where the instance runs. For example, an auditor role user is allowed to retrieve signed audit logs remotely for verification, and an agent role user can use the agent interface to approve a certificate issuance, while an administrator role user can use command-line utilities to configure a profile. In certain cases, however, a Certificate System administrator requires to log in to the host system to modify configuration files directly, or to start or stop a Certificate System instance. Therefore, on the operating system, the administrator role user should be someone who is allowed to make changes to the configuration files and read various logs associated with Red Hat Certificate System. Note Do not allow the Certificate System administrators or anyone other than the auditors to access the audit log files. Create the pkiadmin group on the operating system. Add the pkiuser to the pkiadmin group: Create a user on the operating system. For example, to create the jsmith account: For details, see the useradd(8) man page. Add the user jsmith to the pkiadmin group: For details, see the usermod(8) man page. If you are using a nShield hardware security module (HSM), add the user who manages the HSM device to the nfast group: Add proper sudo rules to allow the pkiadmin group to Certificate System and other system services. For both simplicity of administration and security, the Certificate System and Directory Server processes can be configured so that PKI administrators (instead of only root) can start and stop the services. A recommended option when setting up subsystems is to use a pkiadmin system group. (Details are Section 7.1.1, "Creating OS users and groups" ). All of the operating system users which will be Certificate System administrators are then added to this group. If this pkiadmin system group exists, then it can be granted sudo access to perform certain tasks. Edit the /etc/sudoers file; on Red Hat Enterprise Linux, this can be done using the visudo command: Depending on what is installed on the machine, add a line for the Directory Server, PKI management tools, and each PKI subsystem instance, granting sudo rights to the pkiadmin group: Important Make sure to set sudo permissions for every Certificate System and Directory Server on the machine -and only for those instances on the machine. There could be multiple instances of the same subsystem type on a machine or no instance of a subsystem type. It depends on the deployment. 
Set the group on the following files to pkiadmin : After creating the administrative user on the operating system, follow Section 14.2, "Creating a PKI role user in Certificate System" . 14.2. Creating a PKI role user in Certificate System To create a PKI role user, see Chapter 11 Managing Certificate System Users and Groups in the Administration Guide (Common Criteria Edition) .
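As an optional check after completing these steps, you can confirm the group membership and sudo rules for the new administrative user from the command line. The user name jsmith matches the example in this chapter; substitute the account you created.

# Confirm that the administrator belongs to the pkiadmin group (and the nfast group when an HSM is used).
id jsmith

# List the sudo rules that apply to the administrator; run as root.
sudo -l -U jsmith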
|
[
"groupadd -r pkiadmin",
"usermod -a -G pkiadmin pkiuser",
"useradd -g pkiadmin -d /home/jsmith -s /bin/bash -c \"Red Hat Certificate System Administrator John Smith\" -m jsmith",
"usermod -a -G pkiadmin jsmith",
"usermod -a -G nfast pkiuser",
"usermod -a -G nfast jsmith",
"visudo",
"For Directory Server services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv.target %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv-admin.service For PKI instance management %pkiadmin ALL = PASSWD: /usr/sbin/pkispawn * %pkiadmin ALL = PASSWD: /usr/sbin/pkidestroy * For PKI instance services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * pki-tomcatd@instance_name.service",
"chgrp pkiadmin /etc/pki/instance_name/server.xml chgrp -R pkiadmin /etc/pki/instance_name/alias chgrp pkiadmin /etc/pki/instance_name/subsystem/CS.cfg chgrp pkiadmin /var/log/pki/instance_name/subsystem/debug"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/creating_a_role_user
|
Chapter 2. Configuring Identity Management for smart card authentication
|
Chapter 2. Configuring Identity Management for smart card authentication Identity Management (IdM) supports smart card authentication with: User certificates issued by the IdM certificate authority User certificates issued by an external certificate authority You can configure smart card authentication in IdM for both types of certificates. In this scenario, the rootca.pem CA certificate is the file containing the certificate of a trusted external certificate authority. Note Currently, IdM does not support importing multiple CAs that share the same Subject Distinguished Name (DN) but are cryptographically different. For information about smart card authentication in IdM, see Understanding smart card authentication . For more details on configuring smart card authentication: Configuring the IdM server for smart card authentication Configuring the IdM client for smart card authentication Adding a certificate to a user entry in the IdM Web UI Adding a certificate to a user entry in the IdM CLI Installing tools for managing and using smart cards Storing a certificate on a smart card Logging in to IdM with smart cards Configuring GDM access using smart card authentication Configuring su access using smart card authentication 2.1. Configuring the IdM server for smart card authentication If you want to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts, you must obtain the following certificates so that you can add them when running the ipa-advise script that configures the IdM server: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Steps 1 - 4a in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on the IdM server on which an IdM CA instance is running. The certificates of all of the intermediate CAs; that is, intermediate between the <EXAMPLE.ORG> CA and the IdM CA. To configure an IdM server for smart card authentication: Obtain files with the CA certificates in the PEM format. Run the built-in ipa-advise script. Reload the system configuration. Prerequisites You have root access to the IdM server. You have the root CA certificate and all the intermediate CA certificates. Procedure Create a directory in which you will do the configuration: Navigate to the directory: Obtain the relevant CA certificates stored in files in PEM format. If your CA certificate is stored in a file of a different format, such as DER, convert it to PEM format. The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Convert a DER file to a PEM file: For convenience, copy the certificates to the directory in which you want to do the configuration: Optional: If you use certificates of external certificate authorities, use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Generate a configuration script with the in-built ipa-advise utility, using the administrator's privileges: The config-server-for-smart-card-auth.sh script performs the following actions: It configures the IdM Apache HTTP Server. 
It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Execute the script, adding the PEM files containing the root CA and sub CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, run the procedure on each IdM server. 2.2. Using Ansible to configure the IdM server for smart card authentication You can use Ansible to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts. To do that, you must obtain the following certificates so that you can use them when running an Ansible playbook with the ipasmartcard_server ansible-freeipa role script: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Step 4 in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on any IdM CA server. The certificates of all of the CAs that are intermediate between the <EXAMPLE.ORG> CA and the IdM CA. Prerequisites You have root access to the IdM server. You know the IdM admin password. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. 
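For example, a DER-encoded certificate can be converted with the openssl x509 utility. The file names rootca.cer and rootca.pem below are placeholders for your own certificate files:

# Convert an external CA certificate from DER to PEM format (file names are hypothetical).
openssl x509 -in rootca.cer -inform DER -out rootca.pem -outform PEM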
Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory: In your Ansible inventory file, specify the following: The IdM servers that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-server.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: The ipasmartcard_server Ansible role performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Connect to the IdM server as root : Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server listed in the inventory file is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, set the hosts variable in the Ansible playbook to ipacluster : Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 2.3. Configuring the IdM client for smart card authentication Follow this procedure to configure IdM clients for smart card authentication. The procedure needs to be run on each IdM system, a client or a server, to which you want to connect while using a smart card for authentication. For example, to enable an ssh connection from host A to host B, the script needs to be run on host B. As an administrator, run this procedure to enable smart card authentication using The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. The following procedure assumes that you are configuring smart card authentication on an IdM client, not an IdM server. 
For this reason you need two computers: an IdM server to generate the configuration script, and the IdM client on which to run the script. Prerequisites Your IdM server has been configured for smart card authentication, as described in Configuring the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate and all the intermediate CA certificates. You installed the IdM client with the --mkhomedir option to ensure remote users can log in successfully. If you do not create a home directory, the default login location is the root of the directory structure, / . Procedure On an IdM server, generate a configuration script with ipa-advise using the administrator's privileges: The config-client-for-smart-card-auth.sh script performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or with their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . From the IdM server, copy the script to a directory of your choice on the IdM client machine: From the IdM server, copy the CA certificate files in PEM format for convenience to the same directory on the IdM client machine as used in the step: On the client machine, execute the script, adding the PEM files containing the CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. The client is now configured for smart card authentication. 2.4. Using Ansible to configure IdM clients for smart card authentication Follow this procedure to use the ansible-freeipa ipasmartcard_client module to configure specific Identity Management (IdM) clients to permit IdM users to authenticate with a smart card. Run this procedure to enable smart card authentication for IdM users that use any of the following to access IdM: The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command Note This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. Prerequisites Your IdM server has been configured for smart card authentication, as described in Using Ansible to configure the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM CA certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory, for example: In your Ansible inventory file, specify the following: The IdM clients that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-clients.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook and inventory files: The ipasmartcard_client Ansible role performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . The clients listed in the ipaclients section of the inventory file are now configured for smart card authentication. Note If you have installed the IdM clients with the --mkhomedir option, remote users will be able to log in to their home directories. Otherwise, the default login location is the root of the directory structure, / . Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 2.5. Adding a certificate to a user entry in the IdM Web UI Follow this procedure to add an external certificate to a user entry in IdM Web UI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM Web UI as an administrator if you want to add a certificate to another user. For adding a certificate to your own profile, you do not need the administrator's credentials. Navigate to Users Active users sc_user . Find the Certificate option and click Add . On the command line, display the certificate in the PEM format using the cat utility or a text editor: Copy and paste the certificate from the CLI into the window that has opened in the Web UI. Click Add . Figure 2.1. Adding a new certificate in the IdM Web UI The sc_user entry now contains an external certificate. 2.6. 
Adding a certificate to a user entry in the IdM CLI Follow this procedure to add an external certificate to a user entry in the IdM CLI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM CLI as an administrator if you want to add a certificate to another user: For adding a certificate to your own profile, you do not need the administrator's credentials: Create an environment variable containing the certificate with the header and footer removed and concatenated into a single line, which is the format expected by the ipa user-add-cert command: Note that the certificate in the testuser.crt file must be in the PEM format. Add the certificate to the profile of sc_user using the ipa user-add-cert command: The sc_user entry now contains an external certificate. 2.7. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which you can use to generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running. 2.8. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings, as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 .
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 2.9. Logging in to IdM with smart cards Follow this procedure to use smart cards for logging in to the IdM Web UI. Prerequisites The web browser is configured for using smart card authentication. The IdM server is configured for smart card authentication. The certificate installed on your smart card is either issued by the IdM server or has been added to the user entry in IdM. You know the PIN required to unlock the smart card. The smart card has been inserted into the reader. Procedure Open the IdM Web UI in the browser. Click Log In Using Certificate . If the Password Required dialog box opens, add the PIN to unlock the smart card and click the OK button. The User Identification Request dialog box opens. If the smart card contains more than one certificate, select the certificate you want to use for authentication in the drop down list below Choose a certificate to present as identification . Click the OK button. Now you are successfully logged in to the IdM Web UI. 2.10. Logging in to GDM using smart card authentication on an IdM client The GNOME Desktop Manager (GDM) requires authentication. You can use your password; however, you can also use a smart card for authentication. Follow this procedure to use smart card authentication to access GDM. Prerequisites The system has been configured for smart card authentication. For details, see Configuring the IdM client for smart card authentication . The smart card contains your certificate and private key. The user account is a member of the IdM domain. The certificate on the smart card maps to the user entry through: Assigning the certificate to a particular user entry. For details, see, Adding a certificate to a user entry in the IdM Web UI or Adding a certificate to a user entry in the IdM CLI . The certificate mapping data being applied to the account. For details, see Certificate mapping rules for configuring authentication on smart cards . Procedure Insert the smart card in the reader. Enter the smart card PIN. Click Sign In . You are successfully logged in to the RHEL system and you have a TGT provided by the IdM server. Verification In the Terminal window, enter klist and check the result: 2.11. Using smart card authentication with the su command Changing to a different user requires authentication. You can use a password or a certificate. Follow this procedure to use your smart card with the su command. It means that after entering the su command, you are prompted for the smart card PIN. Prerequisites Your IdM server and client have been configured for smart card authentication. 
See Configuring the IdM server for smart card authentication See Configuring the IdM client for smart card authentication The smart card contains your certificate and private key. See Storing a certificate on a smart card The card is inserted in the reader and connected to the computer. Procedure In a terminal window, change to a different user with the su command: If the configuration is correct, you are prompted to enter the smart card PIN.
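If the PIN prompt does not appear, verify that the system can see the smart card and the certificate stored on it before troubleshooting further. The following is a minimal sketch using tools from the gnutls-utils and opensc packages installed earlier; the exact output depends on your card and reader:
p11tool --list-tokens
pkcs15-tool --list-certificates
If the token and the certificate are listed, the card itself is readable and the problem is more likely in the SSSD or certificate mapping configuration.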
|
[
"mkdir ~/SmartCard/",
"cd ~/SmartCard/",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"cp /tmp/rootca.pem ~/SmartCard/ cp /tmp/subca.pem ~/SmartCard/ cp /tmp/issuingca.pem ~/SmartCard/",
"openssl x509 -noout -text -in rootca.pem | more",
"kinit admin ipa-advise config-server-for-smart-card-auth > config-server-for-smart-card-auth.sh",
"chmod +x config-server-for-smart-card-auth.sh ./config-server-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful",
"SSLOCSPEnable off",
"systemctl restart httpd",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"openssl x509 -noout -text -in root-ca.pem | more",
"cd ~/ MyPlaybooks /",
"mkdir SmartCard/",
"cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt",
"[ipaserver] ipaserver.idm.example.com [ipareplicas] ipareplica1.idm.example.com ipareplica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password= \"{{ ipaadmin_password }}\" ipasmartcard_server_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt",
"--- - name: Playbook to set up smart card authentication for an IdM server hosts: ipaserver become: true roles: - role: ipasmartcard_server state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-server.yml",
"ssh [email protected]",
"SSLOCSPEnable off",
"systemctl restart httpd",
"--- - name: Playbook to setup smartcard for IPA server and replicas hosts: ipacluster [...]",
"kinit admin ipa-advise config-client-for-smart-card-auth > config-client-for-smart-card-auth.sh",
"scp config-client-for-smart-card-auth.sh root @ client.idm.example.com:/root/SmartCard/ Password: config-client-for-smart-card-auth.sh 100% 2419 3.5MB/s 00:00",
"scp {rootca.pem,subca.pem,issuingca.pem} root @ client.idm.example.com:/root/SmartCard/ Password: rootca.pem 100% 1237 9.6KB/s 00:00 subca.pem 100% 2514 19.6KB/s 00:00 issuingca.pem 100% 2514 19.6KB/s 00:00",
"kinit admin chmod +x config-client-for-smart-card-auth.sh ./config-client-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"openssl x509 -noout -text -in root-ca.pem | more",
"cd ~/ MyPlaybooks /",
"mkdir SmartCard/",
"cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt",
"[ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword ipasmartcard_client_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt",
"--- - name: Playbook to set up smart card authentication for an IdM client hosts: ipaclients become: true roles: - role: ipasmartcard_client state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-clients.yml",
"[user@client SmartCard]USD cat testuser.crt",
"[user@client SmartCard]USD kinit admin",
"[user@client SmartCard]USD kinit sc_user",
"[user@client SmartCard]USD export CERT=`openssl x509 -outform der -in testuser.crt | base64 -w0 -`",
"[user@client SmartCard]USD ipa user-add-cert sc_user --certificate=USDCERT",
"dnf -y install opensc gnutls-utils",
"systemctl start pcscd",
"systemctl status pcscd",
"pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:",
"pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name",
"pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name",
"pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init -F",
"klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 04/20/2020 13:58:24 04/20/2020 23:58:24 krbtgt/[email protected] renew until 04/27/2020 08:58:15",
"su - example.user PIN for smart_card"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/configuring-idm-for-smart-card-auth_managing-smart-card-authentication
|
Chapter 19. Near Caching
|
Chapter 19. Near Caching Near caches are optional caches for Hot Rod Java client implementations that keep recently accessed data close to the user, providing faster access to data that is accessed frequently. This cache acts as a local Hot Rod client cache that is updated whenever a remote entry is retrieved via get or getVersioned operations. In Red Hat JBoss Data Grid, near cache consistency is achieved by using remote events, which send notifications to clients when entries are modified or removed (refer to Section 7.7, "Remote Event Listeners (Hot Rod)" ). With near caching, the local cache remains consistent with the remote cache: a local entry is updated or invalidated whenever the corresponding remote entry on the server is updated or removed. At the client level, near caching can be configured in either Lazy or Eager mode; the mode is chosen by the user when enabling near caching. Note Near caching is disabled for Hot Rod clients by default. Figure 19.1. Near Caching Architecture 19.1. Lazy and Eager Near Caches Lazy Near Cache Entries are only added to lazy near caches when they are received remotely via get or getVersioned . If a cache entry is modified or removed on the server side, the Hot Rod client receives the events, which then invalidate the near cache entries by removing them from the near cache. This is an efficient way of maintaining near cache consistency as the events sent back to the client only contain key information. However, if a cache entry is retrieved after being modified, the Hot Rod client must then retrieve it from the remote server. Eager Near Cache Eager near caches are eagerly populated as entries are created on the server. When entries are modified, the latest value is sent along with the notification to the client, which stores it in the near cache. Eager caches are also populated when an entry is retrieved remotely, provided it is not already present. Eager near caches have the advantage of reducing the cost of accessing the server by having newly created entries present in the near cache before requests to retrieve them are received. Eager near caches also allow modified entries that are re-queried by the client to be fetched directly from the near cache. The drawback of using eager near caching is that events received from the server are larger in size due to shipping value information, and entries may be sent to the client that will not be queried. Warning Although the eager near caching setting is provided, it is not supported for production use, as with a high number of events, value sizes, or clients, eager near caching can generate a large amount of network traffic and potentially overload clients. For production use, it is recommended to use lazy near caches instead.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-near_caching
|
Chapter 3. Block pools
|
Chapter 3. Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and they cannot be deleted or modified. With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 3.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click Create Block Pool . Enter Pool name . Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Select Volume Type . Optional: Select the Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create . 3.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . Click the Action Menu (...) at the end of the pool you want to update. Click Edit Block Pool . Modify the form details as follows: Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication. Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Save . 3.3. Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click the Action Menu (...) at the end of the pool you want to delete. Click Delete Block Pool . Click Delete to confirm the removal of the Pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity.
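Although the procedures above use the web console, you can confirm the result from the command line. The following is a minimal sketch that assumes an internal-mode deployment in the default openshift-storage namespace; pool and storage class names will differ in your cluster:
oc get cephblockpools -n openshift-storage
oc get storageclass
The newly created pool should appear in the first list, and any storage class you later map to that pool should appear in the second.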
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/block-pools_rhodf
|
Chapter 1. Red Hat Single Sign-On features and concepts
|
Chapter 1. Red Hat Single Sign-On features and concepts Red Hat Single Sign-On is a single sign on solution for web apps and RESTful web services. The goal of Red Hat Single Sign-On is to make security simple so that it is easy for application developers to secure the apps and services they have deployed in their organization. Security features that developers normally have to write for themselves are provided out of the box and are easily tailorable to the individual requirements of your organization. Red Hat Single Sign-On provides customizable user interfaces for login, registration, administration, and account management. You can also use Red Hat Single Sign-On as an integration platform to hook it into existing LDAP and Active Directory servers. You can also delegate authentication to third party identity providers like Facebook and Google. 1.1. Features Red Hat Single Sign-On provides the following features: Single-Sign On and Single-Sign Out for browser applications. OpenID Connect support. OAuth 2.0 support. SAML support. Identity Brokering - Authenticate with external OpenID Connect or SAML Identity Providers. Social Login - Enable login with Google, GitHub, Facebook, Twitter, and other social networks. User Federation - Sync users from LDAP and Active Directory servers. Kerberos bridge - Automatically authenticate users that are logged-in to a Kerberos server. Admin Console for central management of users, roles, role mappings, clients and configuration. Account Management console that allows users to centrally manage their account. Theme support - Customize all user facing pages to integrate with your applications and branding. Two-factor Authentication - Support for TOTP/HOTP via Google Authenticator or FreeOTP. Login flows - optional user self-registration, recover password, verify email, require password update, etc. Session management - Admins and users themselves can view and manage user sessions. Token mappers - Map user attributes, roles, etc. how you want into tokens and statements. Not-before revocation policies per realm, application and user. CORS support - Client adapters have built-in support for CORS. Client adapters for JavaScript applications, JBoss EAP, etc. Supports any platform/language that has an OpenID Connect Relying Party library or SAML 2.0 Service Provider library. 1.2. Basic Red Hat Single Sign-On operations Red Hat Single Sign-On is a separate server that you manage on your network. Applications are configured to point to and be secured by this server. Red Hat Single Sign-On uses open protocol standards like OpenID Connect or SAML 2.0 to secure your applications. Browser applications redirect a user's browser from the application to the Red Hat Single Sign-On authentication server where they enter their credentials. This redirection is important because users are completely isolated from applications and applications never see a user's credentials. Applications instead are given an identity token or assertion that is cryptographically signed. These tokens can have identity information like username, address, email, and other profile data. They can also hold permission data so that applications can make authorization decisions. These tokens can also be used to make secure invocations on REST-based services. 1.3. Core concepts and terms Consider these core concepts and terms before attempting to use Red Hat Single Sign-On to secure your web applications and REST services. users Users are entities that are able to log into your system. 
They can have attributes associated with themselves like email, username, address, phone number, and birth day. They can be assigned group membership and have specific roles assigned to them. authentication The process of identifying and validating a user. authorization The process of granting access to a user. credentials Credentials are pieces of data that Red Hat Single Sign-On uses to verify the identity of a user. Some examples are passwords, one-time-passwords, digital certificates, or even fingerprints. roles Roles identify a type or category of user. Admin , user , manager , and employee are all typical roles that may exist in an organization. Applications often assign access and permissions to specific roles rather than individual users as dealing with users can be too fine grained and hard to manage. user role mapping A user role mapping defines a mapping between a role and a user. A user can be associated with zero or more roles. This role mapping information can be encapsulated into tokens and assertions so that applications can decide access permissions on various resources they manage. composite roles A composite role is a role that can be associated with other roles. For example a superuser composite role could be associated with the sales-admin and order-entry-admin roles. If a user is mapped to the superuser role they also inherit the sales-admin and order-entry-admin roles. groups Groups manage groups of users. Attributes can be defined for a group. You can map roles to a group as well. Users that become members of a group inherit the attributes and role mappings that group defines. realms A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. clients Clients are entities that can request Red Hat Single Sign-On to authenticate a user. Most often, clients are applications and services that want to use Red Hat Single Sign-On to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that are secured by Red Hat Single Sign-On. client adapters Client adapters are plugins that you install into your application environment to be able to communicate and be secured by Red Hat Single Sign-On. Red Hat Single Sign-On has a number of adapters for different platforms that you can download. There are also third-party adapters you can get for environments that we don't cover. consent Consent is when you as an admin want a user to give permission to a client before that client can participate in the authentication process. After a user provides their credentials, Red Hat Single Sign-On will pop up a screen identifying the client requesting a login and what identity information is requested of the user. User can decide whether or not to grant the request. client scopes When a client is registered, you must define protocol mappers and role scope mappings for that client. It is often useful to store a client scope, to make creating new clients easier by sharing some common settings. This is also useful for requesting some claims or roles to be conditionally based on the value of scope parameter. Red Hat Single Sign-On provides the concept of a client scope for this. client role Clients can define roles that are specific to them. 
This is basically a role namespace dedicated to the client. identity token A token that provides identity information about the user. Part of the OpenID Connect specification. access token A token that can be provided as part of an HTTP request that grants access to the service being invoked on. This is part of the OpenID Connect and OAuth 2.0 specification. assertion Information about a user. This usually pertains to an XML blob that is included in a SAML authentication response that provided identity metadata about an authenticated user. service account Each client has a built-in service account which allows it to obtain an access token. direct grant A way for a client to obtain an access token on behalf of a user via a REST invocation. protocol mappers For each client you can tailor what claims and assertions are stored in the OIDC token or SAML assertion. You do this per client by creating and configuring protocol mappers. session When a user logs in, a session is created to manage the login session. A session contains information like when the user logged in and what applications have participated within single-sign on during that session. Both admins and users can view session information. user federation provider Red Hat Single Sign-On can store and manage users. Often, companies already have LDAP or Active Directory services that store user and credential information. You can point Red Hat Single Sign-On to validate credentials from those external stores and pull in identity information. identity provider An identity provider (IDP) is a service that can authenticate a user. Red Hat Single Sign-On is an IDP. identity provider federation Red Hat Single Sign-On can be configured to delegate authentication to one or more IDPs. Social login via Facebook or Google+ is an example of identity provider federation. You can also hook Red Hat Single Sign-On to delegate authentication to any other OpenID Connect or SAML 2.0 IDP. identity provider mappers When doing IDP federation you can map incoming tokens and assertions to user and session attributes. This helps you propagate identity information from the external IDP to your client requesting authentication. required actions Required actions are actions a user must perform during the authentication process. A user will not be able to complete the authentication process until these actions are complete. For example, an admin may schedule users to reset their passwords every month. An update password required action would be set for all these users. authentication flows Authentication flows are work flows a user must perform when interacting with certain aspects of the system. A login flow can define what credential types are required. A registration flow defines what profile information a user must enter and whether something like reCAPTCHA must be used to filter out bots. Credential reset flow defines what actions a user must do before they can reset their password. events Events are audit streams that admins can view and hook into. themes Every screen provided by Red Hat Single Sign-On is backed by a theme. Themes define HTML templates and stylesheets which you can override as needed.
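As an illustration of the direct grant concept above, a client with direct access grants enabled can obtain an access token for a user with a single REST call to the realm's token endpoint. The following is a minimal sketch; the server URL, realm, client ID, and user credentials are placeholders you must replace with your own values, and a confidential client would additionally need to send its client secret:
curl --data "grant_type=password&client_id=my-client&username=jdoe&password=Secret123" https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/token
The JSON response contains the access token (and typically a refresh token) that the client can then present when invoking other services secured by Red Hat Single Sign-On.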
| null |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/red_hat_single_sign_on_features_and_concepts
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/providing-feedback-on-red-hat-documentation_rhodf
|
Chapter 8. Backing up and restoring IdM
|
Chapter 8. Backing up and restoring IdM Identity Management lets you manually back up and restore the IdM system after a data loss event. During a backup, the system creates a directory that stores information about your IdM setup. You can use this backup directory to restore your original IdM setup. Note The IdM backup and restore features are designed to help prevent data loss. To mitigate the impact of server loss and ensure continued operation, provide alternative servers to clients. For information on establishing a replication topology see Preparing for server loss with replication . 8.1. IdM backup types With the ipa-backup utility, you can create two types of backups: Full-server backup Contains all server configuration files related to IdM, and LDAP data in LDAP Data Interchange Format (LDIF) files IdM services must be offline . Suitable for rebuilding an IdM deployment from scratch. Data-only backup Contains LDAP data in LDIF files and the replication changelog IdM services can be online or offline . Suitable for restoring IdM data to a state in the past 8.2. Naming conventions for IdM backup files By default, IdM stores backups as .tar archives in subdirectories of the /var/lib/ipa/backup/ directory. The archives and subdirectories follow these naming conventions: Full-server backup An archive named ipa-full.tar in a directory named ipa-full- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Data-only backup An archive named ipa-data.tar in a directory named ipa-data- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Note Uninstalling an IdM server does not automatically remove any backup files. 8.3. Considerations when creating a backup The important behaviors and limitations of the ipa-backup command include the following: By default, the ipa-backup utility runs in offline mode, which stops all IdM services. The utility automatically restarts IdM services after the backup is finished. A full-server backup must always run with IdM services offline, but a data-only backup can be performed with services online. By default, the ipa-backup utility creates backups on the file system containing the /var/lib/ipa/backup/ directory. Red Hat recommends creating backups regularly on a file system separate from the production filesystem used by IdM, and archiving the backups to a fixed medium, such as tape or optical storage. Consider performing backups on hidden replicas . IdM services can be shut down on hidden replicas without affecting IdM clients. Starting with RHEL 8.3.0, the ipa-backup utility checks if all of the services used in your IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA), are installed on the server where you are running the backup. If the server does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. You can bypass the IdM server role check with the ipa-backup --disable-role-check command, but the resulting backup will not contain all the data necessary to restore IdM fully. 8.4. 
Creating an IdM backup Create a full-server and data-only backup in offline and online modes using the ipa-backup command. Prerequisites You must have root privileges to run the ipa-backup utility. Procedure To create a full-server backup in offline mode, use the ipa-backup utility without additional options. To create an offline data-only backup, specify the --data option. To create a full-server backup that includes IdM log files, use the --logs option. To create a data-only backup while IdM services are running, specify both --data and --online options. Note If the backup fails due to insufficient space in the /tmp directory, use the TMPDIR environment variable to change the destination for temporary files created by the backup process: Verification Ensure the backup directory contains an archive with the backup. Additional resources ipa-backup command fails to finish (Red Hat Knowledgebase) 8.5. Creating a GPG2-encrypted IdM backup You can create encrypted backups using GNU Privacy Guard (GPG) encryption. The following procedure creates an IdM backup and encrypts it using a GPG2 key. Prerequisites You have created a GPG2 key. See Creating a GPG2 key . Procedure Create a GPG-encrypted backup by specifying the --gpg option. Verification Ensure that the backup directory contains an encrypted archive with a .gpg file extension. Additional resources Creating a backup . 8.6. Creating a GPG2 key The following procedure describes how to generate a GPG2 key to use with encryption utilities. Prerequisites You need root privileges. Procedure Install and configure the pinentry utility. Create a key-input file used for generating a GPG keypair with your preferred details. For example: Optional: By default, GPG2 stores its keyring in the ~/.gnupg file. To use a custom keyring location, set the GNUPGHOME environment variable to a directory that is only accessible by root. Generate a new GPG2 key based on the contents of the key-input file. Enter a passphrase to protect the GPG2 key. You use this passphrase to access the private key for decryption. Confirm the correct passphrase by entering it again. Verify that the new GPG2 key was created successfully. Verification List the GPG keys on the server. Additional resources GNU Privacy Guard 8.7. When to restore from an IdM backup You can respond to several disaster scenarios by restoring from an IdM backup: Undesirable changes were made to the LDAP content : Entries were modified or deleted, replication carried out those changes throughout the deployment, and you want to revert those changes. Restoring a data-only backup returns the LDAP entries to the state without affecting the IdM configuration itself. Total Infrastructure Loss, or loss of all CA instances : If a disaster damages all Certificate Authority replicas, the deployment has lost the ability to rebuild itself by deploying additional servers. In this situation, restore a backup of a CA Replica and build new replicas from it. An upgrade on an isolated server failed : The operating system remains functional, but the IdM data is corrupted, which is why you want to restore the IdM system to a known good state. Red Hat recommends working with Technical Support to diagnose and troubleshoot the issue. If those efforts fail, restore from a full-server backup. Important The preferred solution for hardware or upgrade failure is to rebuild the lost server from a replica. For more information, see Recovering a single server with replication . 8.8. 
Considerations when restoring from an IdM backup If you have a backup created with the ipa-backup utility, you can restore your IdM server or the LDAP content to the state they were in when the backup was performed. The following are the key considerations while restoring from an IdM backup: You can only restore a backup on a server that matches the configuration of the server where the backup was originally created. The server must have: The same hostname The same IP address The same version of IdM software If one IdM server among many is restored, the restored server becomes the only source of information for IdM. All other servers must be re-initialized from the restored server. Since any data created after the last backup will be lost, do not use the backup and restore solution for normal system maintenance. If a server is lost, Red Hat recommends rebuilding the server by reinstalling it as a replica, instead of restoring from a backup. Creating a new replica preserves data from the current working environment. For more information, see Preparing for server loss with replication . The backup and restore features can only be managed from the command line and are not available in the IdM web UI. You cannot restore from backup files located in the /tmp or /var/tmp directories. The IdM Directory Server uses a PrivateTmp directory and cannot access the /tmp or /var/tmp directories commonly available to the operating system. Tip Restoring from a backup requires the same software (RPM) versions on the target host as were installed when the backup was performed. Due to this, Red Hat recommends restoring from a Virtual Machine snapshot rather than a backup. For more information, see Recovering from data loss with VM snapshots . 8.9. Restoring an IdM server from a backup Restore an IdM server, or its LDAP data, from an IdM backup. Figure 8.1. Replication topology used in this example Table 8.1. Server naming conventions used in this example Server host name Function server1.example.com The server that needs to be restored from backup. caReplica2.example.com A Certificate Authority (CA) replica connected to the server1.example.com host. replica3.example.com A replica connected to the caReplica2.example.com host. Prerequisites You have generated a full-server or data-only backup of the IdM server with the ipa-backup utility. See Creating a backup . Your backup files are not in the /tmp or /var/tmp directories. Before performing a full-server restore from a full-server backup, uninstall IdM from the server and reinstall IdM using the same server configuration as before. Procedure Use the ipa-restore utility to restore a full-server or data-only backup. If the backup directory is in the default /var/lib/ipa/backup/ location, enter only the name of the directory: If the backup directory is not in the default location, enter its full path: Note The ipa-restore utility automatically detects the type of backup that the directory contains, and performs the same type of restore by default. To perform a data-only restore from a full-server backup, add the --data option to the ipa-restore command: Enter the Directory Manager password. Enter yes to confirm overwriting current data with the backup. 
The ipa-restore utility disables replication on all servers that are available: The utility then stops IdM services, restores the backup, and restarts the services: Re-initialize all replicas connected to the restored server: List all replication topology segments for the domain suffix, taking note of topology segments involving the restored server. Re-initialize the domain suffix for all topology segments with the restored server. In this example, perform a re-initialization of caReplica2 with data from server1 . Moving on to Certificate Authority data, list all replication topology segments for the ca suffix. Re-initialize all CA replicas connected to the restored server. In this example, perform a csreplica re-initialization of caReplica2 with data from server1 . Continue moving outward through the replication topology, re-initializing successive replicas, until all servers have been updated with the data from restored server server1.example.com . In this example, we only have to re-initialize the domain suffix on replica3 with the data from caReplica2 : Clear SSSD's cache on every server to avoid authentication problems due to invalid data: Stop the SSSD service: Remove all cached content from SSSD: Start the SSSD service: Reboot the server. Additional resources The ipa-restore (1) man page also covers in detail how to handle complex replication scenarios during restoration. 8.10. Restoring from an encrypted backup This procedure restores an IdM server from an encrypted IdM backup. The ipa-restore utility automatically detects if an IdM backup is encrypted and restores it using the GPG2 root keyring. Prerequisites A GPG-encrypted IdM backup. See Creating encrypted IdM backups . The LDAP Directory Manager password The passphrase used when creating the GPG key Procedure If you used a custom keyring location when creating the GPG2 keys, verify that the USDGNUPGHOME environment variable is set to that directory. See Creating a GPG2 key . Provide the ipa-restore utility with the backup directory location. Enter the Directory Manager password. Enter the passphrase you used when creating the GPG key. Re-initialize all replicas connected to the restored server. See Restoring an IdM server from backup .
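After the restore and the re-initialization of the connected replicas, a short sanity check helps confirm that the deployment is healthy again. The following is a minimal sketch to run on the restored server; adjust it to your own topology:
ipactl status
kinit admin
ipa topologysegment-find domain
All IdM services should report RUNNING, the admin ticket should be issued without errors, and the listed topology segments should match your expected replication agreements.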
|
[
"ll /var/lib/ipa/backup/ ipa-full -2021-01-29-12-11-46 total 3056 -rw-r--r--. 1 root root 158 Jan 29 12:11 header -rw-r--r--. 1 root root 3121511 Jan 29 12:11 ipa-full.tar",
"ll /var/lib/ipa/backup/ ipa-data -2021-01-29-12-14-23 total 1072 -rw-r--r--. 1 root root 158 Jan 29 12:14 header -rw-r--r--. 1 root root 1090388 Jan 29 12:14 ipa-data.tar",
"ipa-backup Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Backed up to /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 The ipa-backup command was successful",
"ipa-backup --data",
"ipa-backup --logs",
"ipa-backup --data --online",
"TMPDIR=/new/location ipa-backup",
"ls /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 header ipa-full.tar",
"ipa-backup --gpg Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Encrypting /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00/ipa-full.tar Backed up to /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 The ipa-backup command was successful",
"ls /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 header ipa-full.tar.gpg",
"yum install pinentry mkdir ~/.gnupg -m 700 echo \"pinentry-program /usr/bin/pinentry-curses\" >> ~/.gnupg/gpg-agent.conf",
"cat >key-input <<EOF %echo Generating a standard key Key-Type: RSA Key-Length: 2048 Name-Real: GPG User Name-Comment: first key Name-Email: [email protected] Expire-Date: 0 %commit %echo Finished creating standard key EOF",
"export GNUPGHOME= /root/backup mkdir -p USDGNUPGHOME -m 700",
"gpg2 --batch --gen-key key-input",
"ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β Please enter the passphrase to β β protect your new key β β β β Passphrase: <passphrase> β β β β <OK> <Cancel> β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ",
"ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β Please re-enter this passphrase β β β β Passphrase: <passphrase> β β β β <OK> <Cancel> β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ",
"gpg: keybox '/root/backup/pubring.kbx' created gpg: Generating a standard key gpg: /root/backup/trustdb.gpg: trustdb created gpg: key BF28FFA302EF4557 marked as ultimately trusted gpg: directory '/root/backup/openpgp-revocs.d' created gpg: revocation certificate stored as '/root/backup/openpgp-revocs.d/8F6FCF10C80359D5A05AED67BF28FFA302EF4557.rev' gpg: Finished creating standard key",
"gpg2 --list-secret-keys gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u / root /backup/pubring.kbx ------------------------ sec rsa2048 2020-01-13 [SCEA] 8F6FCF10C80359D5A05AED67BF28FFA302EF4557 uid [ultimate] GPG User (first key) <[email protected]>",
"ipa-restore ipa-full-2020-01-14-12-02-32",
"ipa-restore /mybackups/ipa-data-2020-02-01-05-30-00",
"ipa-restore --data ipa-full-2020-01-14-12-02-32",
"Directory Manager (existing master) password:",
"Preparing restore from /var/lib/ipa/backup/ipa-full-2020-01-14-12-02-32 on server1.example.com Performing FULL restore from FULL backup Temporary setting umask to 022 Restoring data will overwrite existing live data. Continue to restore? [no]: yes",
"Each master will individually need to be re-initialized or re-created from this one. The replication agreements on masters running IPA 3.1 or earlier will need to be manually re-enabled. See the man page for details. Disabling all replication. Disabling replication agreement on server1.example.com to caReplica2.example.com Disabling CA replication agreement on server1.example.com to caReplica2.example.com Disabling replication agreement on caReplica2.example.com to server1.example.com Disabling replication agreement on caReplica2.example.com to replica3.example.com Disabling CA replication agreement on caReplica2.example.com to server1.example.com Disabling replication agreement on replica3.example.com to caReplica2.example.com",
"Stopping IPA services Systemwide CA database updated. Restoring files Systemwide CA database updated. Restoring from userRoot in EXAMPLE-COM Restoring from ipaca in EXAMPLE-COM Restarting GSS-proxy Starting IPA services Restarting SSSD Restarting oddjobd Restoring umask to 18 The ipa-restore command was successful",
"ipa topologysegment-find domain ------------------ 2 segments matched ------------------ Segment name: server1.example.com-to-caReplica2.example.com Left node: server1.example.com Right node: caReplica2.example.com Connectivity: both Segment name: caReplica2.example.com-to-replica3.example.com Left node: caReplica2.example.com Right node: replica3.example.com Connectivity: both ---------------------------- Number of entries returned 2 ----------------------------",
"ipa-replica-manage re-initialize --from= server1.example.com Update in progress, 2 seconds elapsed Update succeeded",
"ipa topologysegment-find ca ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-caReplica2.example.com Left node: server1.example.com Right node: caReplica2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------",
"ipa-csreplica-manage re-initialize --from= server1.example.com Directory Manager password: Update in progress, 3 seconds elapsed Update succeeded",
"ipa-replica-manage re-initialize --from= caReplica2.example.com Directory Manager password: Update in progress, 3 seconds elapsed Update succeeded",
"systemctl stop sssd",
"sss_cache -E",
"systemctl start sssd",
"echo USDGNUPGHOME /root/backup",
"ipa-restore ipa-full-2020-01-13-18-30-54",
"Directory Manager (existing master) password:",
"ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β Please enter the passphrase to unlock the OpenPGP secret key: β β \"GPG User (first key) <[email protected]>\" β β 2048-bit RSA key, ID BF28FFA302EF4557, β β created 2020-01-13. β β β β β β Passphrase: <passphrase> β β β β <OK> <Cancel> β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/backing-up-and-restoring-idm_planning-identity-management
|
Chapter 29. Revision History
|
Chapter 29. Revision History 0.14-26 Mon Aug 05 2019, Marie Dolezelova ( [email protected] ) Preparing document for 7.7 GA publication. 0.14-23 Mon Aug 13 2018, Marie Dolezelova ( [email protected] ) Preparing document for 7.6 Beta publication. 0.14-19 Tue Mar 20 2018, Marie Dolezelova ( [email protected] ) Preparing document for 7.5 GA publication. 0.14-17 Tue Dec 5 2017, Marie Dolezelova ( [email protected] ) Updated Samba section. Added section about Configuring RELP with TLS. Updated section on Upgrading from GRUB Legacy to GRUB 2. 0.14-16 Mon Aug 8 2017, Marie Dolezelova ( [email protected] ) Minor fixes throughout the guide, added links to articles dealing with choosing a target for ordering and dependencies of the custom unit files to the chapter "Creating Custom Unit Files". 0.14-14 Thu Jul 27 2017, Marie Dolezelova ( [email protected] ) Document version for 7.4 GA publication. 0.14-8 Mon Nov 3 2016, Maxim Svistunov ( [email protected] ) Version for 7.3 GA publication. 0.14-7 Mon Jun 20 2016, Maxim Svistunov ( [email protected] ) Added Relax-and-Recover (ReaR) ; made minor improvements. 0.14-6 Thu Mar 10 2016, Maxim Svistunov ( [email protected] ) Minor fixes and updates. 0.14-5 Thu Jan 21 2016, Lenka Spackova ( [email protected] ) Minor factual updates. 0.14-3 Wed Nov 11 2015, Jana Heves ( [email protected] ) Version for 7.2 GA release. 0.14-1 Mon Nov 9 2015, Jana Heves ( [email protected] ) Minor fixes, added links to RH training courses. 0.14-0.3 Fri Apr 3 2015, Stephen Wadeley ( [email protected] ) Added Registering the System and Managing Subscriptions , Accessing Support Using the Red Hat Support Tool , updated Viewing and Managing Log Files . 0.13-2 Tue Feb 24 2015, Stephen Wadeley ( [email protected] ) Version for 7.1 GA release. 0.12-0.6 Tue Nov 18 2014, Stephen Wadeley ( [email protected] ) Improved TigerVNC . 0.12-0.4 Mon Nov 10 2014, Stephen Wadeley ( [email protected] ) Improved Yum , Managing Services with systemd , OpenLDAP , Viewing and Managing Log Files , OProfile , and Working with the GRUB 2 Boot Loader . 0.12-0 Tue 19 Aug 2014, Stephen Wadeley ( [email protected] ) Red Hat Enterprise Linux 7.0 GA release of the System Administrator's Guide. 29.1. Acknowledgments Certain portions of this text first appeared in the Red Hat Enterprise Linux 6 Deployment Guide ,
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/app-revision_history
|