Appendix C. Job template examples and extensions
Appendix C. Job template examples and extensions

Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements.

C.1. Customizing job templates

When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from general ones. The following template combines default templates to install and start the nginx service on clients:

<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %>
<%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>

The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax:

<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %>

With the above template, you must import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set, and select the rendered template from the Target template list. You can import all parameters or specify a comma-separated list.

C.2. Default job template categories

Packages: Templates for performing package-related actions. Install, update, and remove actions are included by default.
Puppet: Templates for executing Puppet runs on target hosts.
Power: Templates for performing power-related actions. Restart and shutdown actions are included by default.
Commands: Templates for executing custom commands on remote hosts.
Services: Templates for performing service-related actions. Start, stop, restart, and status actions are included by default.
Katello: Templates for performing content-related actions. These templates are used mainly from different parts of the Satellite web UI (for example, the bulk actions UI for content hosts), but can also be used separately to perform operations such as errata installation.

C.3. Example restorecon template

This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts.

Procedure
In the Satellite web UI, navigate to Hosts > Templates > Job templates.
Click New Job Template.
Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations.
Add the following text to the template editor:
restorecon -RvF <%= input("directory") %>
The <%= input("directory") %> string is replaced by a user-defined directory during job invocation.
On the Job tab, set Job category to Commands.
Click Add Input to allow job customization. Enter directory in the Name field. The input name must match the value specified in the template editor.
Click Required so that the command cannot be executed without the user-specified parameter.
Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon.
Click Submit.
For more information, see Executing a restorecon Template on Multiple Hosts in Managing hosts.

C.4. Rendering a restorecon template

This example shows how to create a template derived from the Run Command - restorecon template created in Example restorecon Template.
This template does not require user input on job execution; it restores the SELinux context in all files under the /home/ directory on target hosts. Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:

<%= render_template("Run Command - restorecon", :directory => "/home") %>

C.5. Executing a restorecon template on multiple hosts

This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory.

Procedure
In the Satellite web UI, navigate to Monitor > Jobs and click Run job.
Select Commands as Job category and Run Command - restorecon as Job template, and click Next.
Select the hosts on which you want to run the job. If you do not select any hosts, the job runs on all hosts you can see in the current context.
In the directory field, provide a directory, for example /home, and click Next.
Optional: To configure advanced settings for the job, fill in the Advanced fields. To learn more about advanced settings, see Section 12.23, "Advanced settings in the job wizard". When you are done entering the advanced settings, or if they are not required, click Next.
Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution. To execute the job at a later time, select Future execution. To execute the job on a regular basis, select Recurring execution.
Optional: If you selected future or recurring execution, select the Query type; otherwise click Next. Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click Next after you have selected the query type.
Optional: If you selected future or recurring execution, provide additional details: For Future execution, enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it is canceled. For Recurring execution, select the start date and time, frequency, and condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose, a special label for tracking the job. There can only be one active job with a given purpose at a time. Click Next after you have entered the required information.
Review the job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution.

C.6. Including power actions in templates

This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents Satellite from interpreting the disconnect exception upon reboot as an error, so remote execution of the job works correctly. Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:

<%= render_template("Power Action - SSH Default", :action => "restart") %>
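The techniques from Section C.1 can also be combined in a single custom template. The following sketch is illustrative only; it assumes inputs named package and service_name, which you would need to define on the Jobs tab or import as foreign input sets. It installs a user-specified package and then restarts the matching service by rendering the default SSH templates:

<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %>
<%= render_template 'Service Action - SSH Default', :action => 'restart', :service_name => input("service_name") %>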
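Jobs based on these templates can also be started from the command line with the hammer CLI instead of the job wizard. The following is a hedged sketch, not part of the procedures above; the search query is illustrative, and the template and input names assume the Run Command - restorecon template from Section C.3:

hammer job-invocation create \
--job-template "Run Command - restorecon" \
--inputs directory="/home" \
--search-query "os = RedHat"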
[ "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/job_template_examples_and_extensions_managing-hosts
11.4. About JTA Transaction Manager Lookup Classes
11.4. About JTA Transaction Manager Lookup Classes

To execute a cache operation, the cache requires a reference to the environment's Transaction Manager. Configure the cache with the class name of an implementation of the TransactionManagerLookup interface. When initialized, the cache creates an instance of the specified class and invokes its getTransactionManager() method to locate and return a reference to the Transaction Manager.

Table 11.1. Transaction Manager Lookup Classes

org.infinispan.transaction.lookup.DummyTransactionManagerLookup: Used primarily for testing environments. This testing transaction manager is not for use in a production environment and is severely limited in terms of functionality, specifically for concurrent transactions and recovery.

org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup: A fully functional JBoss Transactions based transaction manager that overcomes the functionality limits of the DummyTransactionManager.

org.infinispan.transaction.lookup.GenericTransactionManagerLookup: Used by default when no transaction lookup class is specified. This lookup class is recommended when using JBoss Data Grid with a Java EE-compatible environment that provides a TransactionManager interface, and it is capable of locating the Transaction Manager in most Java EE application servers. If no transaction manager is located, it defaults to DummyTransactionManager.

Note that when using Red Hat JBoss Data Grid with Tomcat or an ordinary Java Virtual Machine (JVM), the recommended Transaction Manager Lookup class is JBossStandaloneJTAManagerLookup, which uses JBoss Transactions.
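As an illustration of how a lookup class is configured, the following declarative sketch assumes the Infinispan 6.x library-mode configuration schema used by JBoss Data Grid; verify the element and attribute names against the schema for your exact version. The equivalent programmatic configuration sets the same lookup instance through ConfigurationBuilder.transaction().

<namedCache name="transactionalCache">
    <!-- Select the lookup class that locates the environment's Transaction Manager -->
    <transaction transactionMode="TRANSACTIONAL"
                 transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"/>
</namedCache>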
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/about_jta_transaction_manager_lookup_classes
Chapter 4. Template [template.openshift.io/v1]
Chapter 4. Template [template.openshift.io/v1] Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required objects 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds labels object (string) labels is a optional set of labels that are applied to every object during the Template to Config transformation. message string message is an optional instructional message that will be displayed when this template is instantiated. This field should inform the user how to utilize the newly created resources. Parameter substitution will be performed on the message before being displayed so that generated credentials and other parameters can be included in the output. metadata ObjectMeta objects array (RawExtension) objects is an array of resources to include in this template. If a namespace value is hardcoded in the object, it will be removed during template instantiation, however if the namespace value is, or contains, a USD{PARAMETER_REFERENCE}, the resolved value after parameter substitution will be respected and the object will be created in that namespace. parameters array parameters is an optional array of Parameters used during the Template to Config transformation. parameters[] object Parameter defines a name/value variable that is to be processed during the Template to Config transformation. 4.1.1. .parameters Description parameters is an optional array of Parameters used during the Template to Config transformation. Type array 4.1.2. .parameters[] Description Parameter defines a name/value variable that is to be processed during the Template to Config transformation. Type object Required name Property Type Description description string Description of a parameter. Optional. displayName string Optional: The name that will show in UI instead of parameter 'Name' from string From is an input value for the generator. Optional. generate string generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name string Name must be set and it can be referenced in Template Items using USD{PARAMETER_NAME}. Required. required boolean Optional: Indicates the parameter must have a value. Defaults to false. value string Value holds the Parameter data. If specified, the generator will be ignored. 
The value replaces all occurrences of the Parameter USD{Name} expression during the Template to Config transformation. Optional. 4.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/templates GET : list or watch objects of kind Template /apis/template.openshift.io/v1/watch/templates GET : watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templates DELETE : delete collection of Template GET : list or watch objects of kind Template POST : create a Template /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates GET : watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templates/{name} DELETE : delete a Template GET : read the specified Template PATCH : partially update the specified Template PUT : replace the specified Template /apis/template.openshift.io/v1/namespaces/{namespace}/processedtemplates POST : create a Template /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates/{name} GET : watch changes to an object of kind Template. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/template.openshift.io/v1/templates Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Template Table 4.2. HTTP responses HTTP code Reponse body 200 - OK TemplateList schema 401 - Unauthorized Empty 4.2.2. /apis/template.openshift.io/v1/watch/templates Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/template.openshift.io/v1/namespaces/{namespace}/templates Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Template Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Template Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK TemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a Template Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body Template schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 202 - Accepted Template schema 401 - Unauthorized Empty 4.2.4. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/template.openshift.io/v1/namespaces/{namespace}/templates/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the Template namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Template Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Template schema 202 - Accepted Template schema 401 - Unauthorized Empty HTTP method GET Description read the specified Template Table 4.23. HTTP responses HTTP code Reponse body 200 - OK Template schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Template Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Template Table 4.27. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body Template schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 401 - Unauthorized Empty 4.2.6. /apis/template.openshift.io/v1/namespaces/{namespace}/processedtemplates Table 4.30. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a Template Table 4.32. Body parameters Parameter Type Description body Template schema Table 4.33. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 202 - Accepted Template schema 401 - Unauthorized Empty 4.2.7. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates/{name} Table 4.34. Global path parameters Parameter Type Description name string name of the Template namespace string object name and auth scope, such as for teams and projects Table 4.35. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Template. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.36. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
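To make the specification in section 4.1 concrete, the following is a minimal, hypothetical Template manifest; the object, parameter, and value names are illustrative only. It defines one regular parameter and one parameter that uses the "expression" generator described above, and substitutes both into a single object:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
labels:
  template: example-template
message: "Secret ${APP_NAME}-secret was created with a generated token."
parameters:
- name: APP_NAME
  displayName: Application name
  description: Prefix used for the created objects.
  value: example
  required: true
- name: TOKEN
  description: Randomly generated token.
  generate: expression
  from: "[a-zA-Z0-9]{16}"
objects:
- apiVersion: v1
  kind: Secret
  metadata:
    name: ${APP_NAME}-secret
  stringData:
    token: ${TOKEN}

Assuming the oc client and access to a namespace, such a template can be instantiated locally with, for example, oc process -f example-template.yaml -p APP_NAME=myapp | oc apply -f - , or server side by POSTing the Template to the processedtemplates endpoint listed in section 4.2.6.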
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/template_apis/template-template-openshift-io-v1
Chapter 2. Introduction to cloud-init
Chapter 2. Introduction to cloud-init

The cloud-init utility automates the initialization of cloud instances during system boot. You can configure cloud-init to perform a variety of tasks:
Configuring a host name
Installing packages on an instance
Running scripts
Suppressing default virtual machine (VM) behavior

Prerequisites
Sign up for a Red Hat Customer Portal account.

cloud-init is available in various types of RHEL images. For example:
If you download a KVM guest image from the Red Hat Customer Portal, the image comes preinstalled with the cloud-init package, and the cloud-init service is enabled after you launch the instance. KVM guest images on the Red Hat Customer Portal are intended for use with Red Hat Virtualization (RHV), Red Hat OpenStack Platform (RHOSP), and Red Hat OpenShift Virtualization.
You can also download the RHEL ISO image from the Red Hat Customer Portal to create a custom guest image. In this case, you need to install the cloud-init package on the customized guest image.
If you want to use an image from a cloud service provider (for example, AWS or Azure), use the RHEL image builder to create the image. Image builder images are customized for specific cloud providers. The following image types include cloud-init already installed:
Amazon Machine Image (AMI)
Virtual Hard Disk (VHD)
QEMU copy-on-write (qcow2)
For details about the RHEL image builder, see Composing a customized RHEL system image.

Most cloud platforms support cloud-init, but configuration procedures and supported options vary. Alternatively, you can configure cloud-init for the NoCloud environment. In addition, you can configure cloud-init on one VM and then use that VM as a template to create additional VMs or clusters of VMs. Specific Red Hat products, for example Red Hat Virtualization, have documented procedures to configure cloud-init for those products.

2.1. Overview of the cloud-init configuration

The cloud-init utility uses YAML-formatted configuration files to apply user-defined tasks to instances. When an instance boots, the cloud-init service starts and executes the instructions from the YAML file. Depending on the configuration, tasks complete either during the first boot or on subsequent boots of the VM. To define the specific tasks, configure the /etc/cloud/cloud.cfg file and add directives under the /etc/cloud/cloud.cfg.d/ directory. The cloud.cfg file includes directives for various system configurations, such as user access, authentication, and system information. The file also includes the default and optional modules for cloud-init. These modules execute in order across three phases: the cloud-init initialization phase, the configuration phase, and the final phase. In the cloud.cfg file, the modules for the three phases are listed under cloud_init_modules, cloud_config_modules, and cloud_final_modules respectively. You can add additional directives for cloud-init in the cloud.cfg.d directory. When adding directives to the cloud.cfg.d directory, add them to a custom file named *.cfg and always include #cloud-config at the top of the file.

2.2. cloud-init operates in stages

During system boot, the cloud-init utility operates in five stages that determine whether cloud-init runs and where it finds its datasources, among other tasks. The stages are as follows:
Generator stage: Using systemd, this stage determines whether to run the cloud-init utility during boot.
Local stage : cloud-init searches local datasources and applies network configuration, including the DHCP-based fallback mechanism. Network stage : cloud-init processes user data by running modules listed under cloud_init_modules in the /etc/cloud/cloud.cfg file. You can add, remove, enable, or disable modules in the cloud_init_modules section. Config stage : cloud-init runs modules listed under the cloud_config_modules section in the /etc/cloud/cloud.cfg file. You can add, remove, enable, or disable modules in the cloud_config_modules section. Final stage : cloud-init runs modules and configurations included in the cloud_final_modules section of the /etc/cloud/cloud.cfg file. It can include the installation of specific packages, as well as triggering configuration management plug-ins and user-defined scripts. You can add, remove, enable, or disable modules in the cloud_final_modules section. Additional resources Boot Stages of cloud-init 2.3. cloud-init modules execute in phases When cloud-init runs, it executes the modules within cloud.cfg in order within three phases: The network phase ( cloud_init_modules ) The configuration phase ( cloud_config_modules ) The final phase ( cloud_final_modules ) When cloud-init runs for the first time on a VM, all the modules you have configured run in their respective phases. On subsequent runs of cloud-init , whether a module runs within a phase depends on that module's frequency. Some modules run every time cloud-init runs; some modules only run the first time cloud-init runs, even if the instance ID changes. Note An instance ID uniquely identifies an instance. When an instance ID changes, cloud-init treats the instance as a new instance. The possible module frequency values are as follows: Per instance means that the module runs on the first boot of an instance. For example, if you clone an instance or create a new instance from a saved image, the modules designated as per instance run again. Per once means that the module runs only once. For example, if you clone an instance or create a new instance from a saved image, the modules designated per once do not run again on those instances. Per always means the module runs on every boot. Note You can override a module's frequency when you configure the module or by using the command line. 2.4. cloud-init acts upon user data, metadata, and vendor data The datasources that cloud-init consumes are user data, metadata, and vendor data. User data includes directives you specify in the cloud.cfg file and in the cloud.cfg.d directory, for example, user data can include files to run, packages to install, and shell scripts. Refer to the cloud-init Documentation section User-Data Formats for information about the types of user data that cloud-init allows. Metadata includes data associated with a specific datasource, for example, metadata can include a server name and instance ID. If you are using a specific cloud platform, the platform determines where your instances find user data and metadata. Your platform may require that you add metadata and user data to an HTTP service; in this case, when cloud-init runs, it consumes metadata and user data from the HTTP service. Vendor data is optionally provided by the organization (for example, a cloud provider) and includes information that can customize the image to better fit the environment where the image runs. cloud-init acts upon optional vendor data and user data after it reads any metadata and initializes the system. 
By default, vendor data runs on the first boot. You can disable vendor data execution. Refer to the cloud-init Documentation section Instance Metadata for a description of metadata; Datasources for a list of datasources; and Vendor Data for more information about vendor data. 2.5. cloud-init identifies the cloud platform cloud-init attempts to identify the cloud platform using the script ds-identify . The script runs on the first boot of an instance. Adding a datasource directive can save time when cloud-init runs. Add the directive to the /etc/cloud/cloud.cfg file or to a file in the /etc/cloud/cloud.cfg.d directory. For example: Beyond adding the directive for your cloud platform, you can further configure cloud-init by adding additional configuration details, such as metadata URLs. After cloud-init runs, you can view a log file ( /run/cloud-init/ds-identify.log ) that provides detailed information about the platform. Additional resources Datasources How to identify the datasource I'm using How can I debug my user data? 2.6. Additional resources Upstream documentation for cloud-init
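As an illustration of the user data format described above, the following is a minimal sketch of a custom configuration file that you could place under the /etc/cloud/cloud.cfg.d/ directory. The file name, host name, and package shown here are hypothetical examples; adjust them for your environment.
#cloud-config
# Hypothetical example: /etc/cloud/cloud.cfg.d/10_example.cfg
# Set the host name of the instance
hostname: example-host
# Install a package on first boot
packages:
  - chrony
# Run a command after the other modules complete
runcmd:
  - systemctl enable --now chronyd
Because the file starts with #cloud-config, cloud-init merges these directives with the defaults in cloud.cfg and runs the corresponding modules in their respective phases.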
[ "datasource_list:[Ec2]", "datasource_list: [Ec2] datasource: Ec2: metadata_urls: ['http://169.254.169.254']" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_cloud-init_for_rhel_9/introduction-to-cloud-init_cloud-content
Common object reference
Common object reference OpenShift Container Platform 4.17 Reference guide common API objects Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/common_object_reference/index
Chapter 1. Connectivity Link prerequisites and permissions
Chapter 1. Connectivity Link prerequisites and permissions Before you install Connectivity Link, you must ensure that you have access to the required platforms in your environment with the correct user permissions. 1.1. Required platforms and components Red Hat account You have a Red Hat account with subscriptions for Connectivity Link and OpenShift. OpenShift OpenShift Container Platform 4.16 or later is installed, or you have access to a supported OpenShift cloud service. You are logged into an OpenShift cluster with the cluster-admin role. You have the kubectl or oc command installed. OpenShift Service Mesh Red Hat OpenShift Service Mesh 3.0 Technology Preview 2 is installed on OpenShift as your Gateway API provider. For more details, see the OpenShift Service Mesh installation documentation . You have enabled the Gateway API feature in OpenShift Service Mesh 3.0 Technology Preview 2. For more details, see the OpenShift Service Mesh documentation on enabling Gateway API . Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features - Scope of Support . cert-manager Operator for Red Hat OpenShift cert-manager Operator for Red Hat OpenShift 1.14 is installed to manage the TLS certificates for your Gateways. For more details, see the cert-manager Operator for Red Hat OpenShift documentation . Note Before using a Connectivity Link TLSPolicy, you must set up a certificate issuer for your cloud provider platform. For more details, see the OpenShift documentation on configuring an ACME issuer . 1.2. Optional platforms and components DNSPolicy For DNSPolicy, you have an account for one of the supported cloud DNS providers and have set up a hosted zone for Connectivity Link. For more details, see your cloud DNS provider documentation: Amazon Route 53 documentation . Google Cloud DNS documentation . Microsoft Azure DNS documentation . RateLimitPolicy For rate limiting policies, you have a shared accessible Redis-based datastore for rate limit counters in a multicluster environment. For details on how to install and configure a secure and highly available datastore, see the documentation for your Redis-compatible datastore: Redis documentation AWS ElastiCache (Redis OSS) User Guide Dragonfly documentation AuthPolicy For AuthPolicy, you can install Red Hat build of Keycloak if this is required in your environment. For more details, see the Red Hat build of Keycloak documentation . Observability For Observability, OpenShift user workload monitoring must be configured to remote write to a central storage system such as Thanos. For more details, see the Connectivity Link Observability Guide . Additional resources For more details, see Supported Configurations for Red Hat Connectivity Link .
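As a quick sanity check before installing, you can verify your cluster access and the presence of the Gateway API resources with commands similar to the following. This is a sketch only; the CRD name shown assumes a standard Gateway API installation provided by your Gateway API provider.
# Confirm the logged-in user and cluster-admin level permissions
oc whoami
oc auth can-i '*' '*' --all-namespaces
# Confirm that the Gateway API CustomResourceDefinition is available
oc get crd gateways.gateway.networking.k8s.io
If any of these checks fail, revisit the corresponding prerequisite above before continuing with the installation.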
null
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/installing_connectivity_link_on_openshift/install-prerequisites_connectivity-link
Chapter 84. Phreak rule algorithm in the decision engine
Chapter 84. Phreak rule algorithm in the decision engine The decision engine in Red Hat Decision Manager uses the Phreak algorithm for rule evaluation. Phreak evolved from the Rete algorithm, including the enhanced Rete algorithm ReteOO that was introduced in versions of Red Hat Decision Manager for object-oriented systems. Overall, Phreak is more scalable than Rete and ReteOO, and is faster in large systems. While Rete is considered eager (immediate rule evaluation) and data oriented, Phreak is considered lazy (delayed rule evaluation) and goal oriented. The Rete algorithm performs many actions during the insert, update, and delete actions in order to find partial matches for all rules. This eagerness of the Rete algorithm during rule matching requires a lot of time before eventually executing rules, especially in large systems. With Phreak, this partial matching of rules is delayed deliberately to handle large amounts of data more efficiently. The Phreak algorithm adds the following set of enhancements to Rete algorithms: Three layers of contextual memory: Node, segment, and rule memory types Rule-based, segment-based, and node-based linking Lazy (delayed) rule evaluation Stack-based evaluations with pause and resume Isolated rule evaluation Set-oriented propagations 84.1. Rule evaluation in Phreak When the decision engine starts, all rules are considered to be unlinked from pattern-matching data that can trigger the rules. At this stage, the Phreak algorithm in the decision engine does not evaluate the rules. The insert , update , and delete actions are queued, and Phreak uses a heuristic, based on the rule most likely to result in execution, to calculate and select the rule for evaluation. When all the required input values are populated for a rule, the rule is considered to be linked to the relevant pattern-matching data. Phreak then creates a goal that represents this rule and places the goal into a priority queue that is ordered by rule salience. Only the rule for which the goal was created is evaluated, and other potential rule evaluations are delayed. While individual rules are evaluated, node sharing is still achieved through the process of segmentation. Unlike the tuple-oriented Rete, the Phreak propagation is collection oriented. For the rule that is being evaluated, the decision engine accesses the first node and processes all queued insert, update, and delete actions. The results are added to a set, and the set is propagated to the child node. In the child node, all queued insert, update, and delete actions are processed, adding the results to the same set. The set is then propagated to the child node and the same process repeats until it reaches the terminal node. This cycle creates a batch process effect that can provide performance advantages for certain rule constructs. The linking and unlinking of rules happens through a layered bit-mask system, based on network segmentation. When the rule network is built, segments are created for rule network nodes that are shared by the same set of rules. A rule is composed of a path of segments. In case a rule does not share any node with any other rule, it becomes a single segment. A bit-mask offset is assigned to each node in the segment. Another bit mask is assigned to each segment in the path of the rule according to these requirements: If at least one input for a node exists, the node bit is set to the on state. If each node in a segment has the bit set to the on state, the segment bit is also set to the on state. 
If any node bit is set to the off state, the segment is also set to the off state. If each segment in the path of the rule is set to the on state, the rule is considered linked, and a goal is created to schedule the rule for evaluation. The same bit-mask technique is used to track modified nodes, segments, and rules. This tracking ability enables an already linked rule to be unscheduled from evaluation if it has been modified since the evaluation goal for it was created. As a result, no rules can ever evaluate partial matches. This process of rule evaluation is possible in Phreak because, as opposed to a single unit of memory in Rete, Phreak has three layers of contextual memory with node, segment, and rule memory types. This layering enables much more contextual understanding during the evaluation of a rule. Figure 84.1. Phreak three-layered memory system The following examples illustrate how rules are organized and evaluated in this three-layered memory system in Phreak. Example 1: A single rule (R1) with three patterns: A, B and C. The rule forms a single segment, with bits 1, 2, and 4 for the nodes. The single segment has a bit offset of 1. Figure 84.2. Example 1: Single rule Example 2: Rule R2 is added and shares pattern A. Figure 84.3. Example 2: Two rules with pattern sharing Pattern A is placed in its own segment, resulting in two segments for each rule. Those two segments form a path for their respective rules. The first segment is shared by both paths. When pattern A is linked, the segment becomes linked. The segment then iterates over each path that the segment is shared by, setting the bit 1 to on . If patterns B and C are later turned on, the second segment for path R1 is linked, and this causes bit 2 to be turned on for R1. With bit 1 and bit 2 turned on for R1, the rule is now linked and a goal is created to schedule the rule for later evaluation and execution. When a rule is evaluated, the segments enable the results of the matching to be shared. Each segment has a staging memory to queue all inserts, updates, and deletes for that segment. When R1 is evaluated, the rule processes pattern A, and this results in a set of tuples. The algorithm detects a segmentation split, creates peered tuples for each insert, update, and delete in the set, and adds them to the R2 staging memory. Those tuples are then merged with any existing staged tuples and are executed when R2 is eventually evaluated. Example 3: Rules R3 and R4 are added and share patterns A and B. Figure 84.4. Example 3: Three rules with pattern sharing Rules R3 and R4 have three segments and R1 has two segments. Patterns A and B are shared by R1, R3, and R4, while pattern D is shared by R3 and R4. Example 4: A single rule (R1) with a subnetwork and no pattern sharing. Figure 84.5. Example 4: Single rule with a subnetwork and no pattern sharing Subnetworks are formed when a Not , Exists , or Accumulate node contains more than one element. In this example, the element B not( C ) forms the subnetwork. The element not( C ) is a single element that does not require a subnetwork and is therefore merged inside of the Not node. The subnetwork uses a dedicated segment. Rule R1 still has a path of two segments and the subnetwork forms another inner path. When the subnetwork is linked, it is also linked in the outer segment. Example 5: Rule R1 with a subnetwork that is shared by rule R2. Figure 84.6. 
Example 5: Two rules, one with a subnetwork and pattern sharing The subnetwork nodes in a rule can be shared by another rule that does not have a subnetwork. This sharing causes the subnetwork segment to be split into two segments. Constrained Not nodes and Accumulate nodes can never unlink a segment, and are always considered to have their bits turned on. The Phreak evaluation algorithm is stack based instead of method-recursion based. Rule evaluation can be paused and resumed at any time when a StackEntry is used to represent the node currently being evaluated. When a rule evaluation reaches a subnetwork, a StackEntry object is created for the outer path segment and the subnetwork segment. The subnetwork segment is evaluated first, and when the set reaches the end of the subnetwork path, the segment is merged into a staging list for the outer node that the segment feeds into. The StackEntry object is then resumed and can now process the results of the subnetwork. This process has the added benefit, especially for Accumulate nodes, that all work is completed in a batch, before propagating to the child node. The same stack system is used for efficient backward chaining. When a rule evaluation reaches a query node, the evaluation is paused and the query is added to the stack. The query is then evaluated to produce a result set, which is saved in a memory location for the resumed StackEntry object to pick up and propagate to the child node. If the query itself called other queries, the process repeats, while the current query is paused and a new evaluation is set up for the current query node. 84.1.1. Rule evaluation with forward and backward chaining The decision engine in Red Hat Decision Manager is a hybrid reasoning system that uses both forward chaining and backward chaining to evaluate rules. A forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. In contrast, a backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 84.7. Rule evaluation logic using forward and backward chaining 84.2. Rule base configuration Red Hat Decision Manager contains a RuleBaseConfiguration.java object that you can use to configure exception handler settings, multithreaded execution, and sequential mode in the decision engine. For the rule base configuration options, download the Red Hat Process Automation Manager 7.13.5 Source Distribution ZIP file from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/drools-USDVERSION/drools-core/src/main/java/org/drools/core/RuleBaseConfiguration.java . The following rule base configuration options are available for the decision engine: drools.consequenceExceptionHandler When configured, this system property defines the class that manages the exceptions thrown by rule consequences. 
You can use this property to specify a custom exception handler for rule evaluation in the decision engine. Default value: org.drools.core.runtime.rule.impl.DefaultConsequenceExceptionHandler You can specify the custom exception handler using one of the following options: Specify the exception handler in a system property: Specify the exception handler while creating the KIE base programmatically: KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(ConsequenceExceptionHandlerOption.get(MyCustomConsequenceExceptionHandler.class)); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); drools.multithreadEvaluation When enabled, this system property enables the decision engine to evaluate rules in parallel by dividing the Phreak rule network into independent partitions. You can use this property to increase the speed of rule evaluation for specific rule bases. Default value: false You can enable multithreaded evaluation using one of the following options: Enable the multithreaded evaluation system property: Enable multithreaded evaluation while creating the KIE base programmatically: KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(MultithreadEvaluationOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); Warning Rules that use queries, salience, or agenda groups are currently not supported by the parallel decision engine. If these rule elements are present in the KIE base, the compiler emits a warning and automatically switches back to single-threaded evaluation. However, in some cases, the decision engine might not detect the unsupported rule elements and rules might be evaluated incorrectly. For example, the decision engine might not detect when rules rely on implicit salience given by rule ordering inside the DRL file, resulting in incorrect evaluation due to the unsupported salience attribute. drools.sequential When enabled, this system property enables sequential mode in the decision engine. In sequential mode, the decision engine evaluates rules one time in the order that they are listed in the decision engine agenda without regard to changes in the working memory. This means that the decision engine ignores any insert , modify , or update statements in rules and executes rules in a single sequence. As a result, rule execution may be faster in sequential mode, but important updates may not be applied to your rules. You can use this property if you use stateless KIE sessions and you do not want the execution of rules to influence subsequent rules in the agenda. Sequential mode applies to stateless KIE sessions only. Default value: false You can enable sequential mode using one of the following options: Enable the sequential mode system property: Enable sequential mode while creating the KIE base programmatically: KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); Enable sequential mode in the KIE module descriptor file ( kmodule.xml ) for a specific Red Hat Decision Manager project: <kmodule> ... <kbase name="KBase2" default="false" sequential="true" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1"> ... </kbase> ... </kmodule> 84.3. 
Sequential mode in Phreak Sequential mode is an advanced rule base configuration in the decision engine, supported by Phreak, that enables the decision engine to evaluate rules one time in the order that they are listed in the decision engine agenda without regard to changes in the working memory. In sequential mode, the decision engine ignores any insert , modify , or update statements in rules and executes rules in a single sequence. As a result, rule execution may be faster in sequential mode, but important updates may not be applied to your rules. Sequential mode applies to only stateless KIE sessions because stateful KIE sessions inherently use data from previously invoked KIE sessions. If you use a stateless KIE session and you want the execution of rules to influence subsequent rules in the agenda, then do not enable sequential mode. Sequential mode is disabled by default in the decision engine. To enable sequential mode, use one of the following options: Set the system property drools.sequential to true . Enable sequential mode while creating the KIE base programmatically: KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); Enable sequential mode in the KIE module descriptor file ( kmodule.xml ) for a specific Red Hat Decision Manager project: <kmodule> ... <kbase name="KBase2" default="false" sequential="true" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1"> ... </kbase> ... </kmodule> To configure sequential mode to use a dynamic agenda, use one of the following options: Set the system property drools.sequential.agenda to dynamic . Set the sequential agenda option while creating the KIE base programmatically: KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialAgendaOption.DYNAMIC); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); When you enable sequential mode, the decision engine evaluates rules in the following way: Rules are ordered by salience and position in the rule set. An element for each possible rule match is created. The element position indicates the execution order. Node memory is disabled, with the exception of the right-input object memory. The left-input adapter node propagation is disconnected and the object with the node is referenced in a Command object. The Command object is added to a list in the working memory for later execution. All objects are asserted, and then the list of Command objects is checked and executed. All matches that result from executing the list are added to elements based on the sequence number of the rule. The elements that contain matches are executed in a sequence. If you set a maximum number of rule executions, the decision engine activates no more than that number of rules in the agenda for execution. In sequential mode, the LeftInputAdapterNode node creates a Command object and adds it to a list in the working memory of the decision engine. This Command object contains references to the LeftInputAdapterNode node and the propagated object. These references stop any left-input propagations at insertion time so that the right-input propagation never needs to attempt to join the left inputs. The references also avoid the need for the left-input memory. 
All nodes have their memory turned off, including the left-input tuple memory, but excluding the right-input object memory. After all the assertions are finished and the right-input memory of all the objects is populated, the decision engine iterates over the list of LeftInputAdatperNode Command objects. The objects propagate down the network, attempting to join the right-input objects, but they are not retained in the left input. The agenda with a priority queue to schedule the tuples is replaced by an element for each rule. The sequence number of the RuleTerminalNode node indicates the element where to place the match. After all Command objects have finished, the elements are checked and existing matches are executed. To improve performance, the first and the last populated cell in the elements are retained. When the network is constructed, each RuleTerminalNode node receives a sequence number based on its salience number and the order in which it was added to the network. The right-input node memories are typically hash maps for fast object deletion. Because object deletions are not supported, Phreak uses an object list when the values of the object are not indexed. For a large number of objects, indexed hash maps provide a performance increase. If an object has only a few instances, Phreak uses an object list instead of an index.
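The following Java sketch illustrates the layered bit-mask linking described earlier in this chapter. It is a simplified, hypothetical model for clarity only, not the actual decision engine implementation; the class and field names are invented.
public class LinkingSketch {

    static class Segment {
        long nodeBits;     // one bit per node that currently has at least one input
        long allNodesMask; // mask with a bit set for every node in the segment

        boolean isLinked() {
            // the segment bit is on only when every node bit is on
            return (nodeBits & allNodesMask) == allNodesMask;
        }
    }

    static class RulePath {
        Segment[] segments; // the path of segments that composes the rule

        boolean isLinked() {
            // the rule is linked, and can be scheduled for evaluation,
            // only when every segment in its path is linked
            for (Segment segment : segments) {
                if (!segment.isLinked()) {
                    return false;
                }
            }
            return true;
        }
    }

    public static void main(String[] args) {
        // Example 1 from this chapter: one segment with three nodes (A, B, C)
        Segment segment = new Segment();
        segment.allNodesMask = 0b111; // bits 1, 2, and 4
        segment.nodeBits = 0b011;     // A and B have inputs, C does not

        RulePath r1 = new RulePath();
        r1.segments = new Segment[] { segment };
        System.out.println("R1 linked? " + r1.isLinked()); // false

        segment.nodeBits |= 0b100;    // an input arrives for C
        System.out.println("R1 linked? " + r1.isLinked()); // true
    }
}
In the real engine, the same masks are also used to unschedule an already linked rule when its inputs change before the evaluation goal runs.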
[ "drools.consequenceExceptionHandler=org.drools.core.runtime.rule.impl.MyCustomConsequenceExceptionHandler", "KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(ConsequenceExceptionHandlerOption.get(MyCustomConsequenceExceptionHandler.class)); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);", "drools.multithreadEvaluation=true", "KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(MultithreadEvaluationOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);", "drools.sequential=true", "KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);", "<kmodule> <kbase name=\"KBase2\" default=\"false\" sequential=\"true\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\"> </kbase> </kmodule>", "KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialOption.YES); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);", "<kmodule> <kbase name=\"KBase2\" default=\"false\" sequential=\"true\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\"> </kbase> </kmodule>", "KieServices ks = KieServices.Factory.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(SequentialAgendaOption.DYNAMIC); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/phreak-algorithm-con_decision-engine
7.105. libica
7.105. libica 7.105.1. RHBA-2015:1283 - libica bug fix and enhancement update Updated libica packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libica library contains a set of functions and utilities for accessing the IBM eServer Cryptographic Accelerator (ICA) hardware on IBM System z. Note The libica packages have been upgraded to upstream version 2.4.2, which provides a number of bug fixes and enhancements over the previous version, including improved statistics tracking of cryptographic requests issued by libica, increased security of the cryptography library, and enhanced usability that enables better monitoring and debugging of the cryptography stack on IBM System z. (BZ# 1148124 ) Users of libica are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libica
Chapter 17. Configuring system controls and interface attributes using the tuning plugin
Chapter 17. Configuring system controls and interface attributes using the tuning plugin In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plugin. The tuning CNI meta plugin operates in a chain with a main CNI plugin as illustrated. The main CNI plugin assigns the interface and passes this interface to the tuning CNI meta plugin at runtime. You can change some sysctls and several interface attributes such as promiscuous mode, all-multicast mode, MTU, and MAC address in the network namespace by using the tuning CNI meta plugin. 17.1. Configuring system controls by using the tuning CNI The following procedure configures the tuning CNI to change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl. This example enables accepting and sending ICMP-redirected packets. In the tuning CNI meta plugin configuration, the interface name is represented by the IFNAME token and is replaced with the actual name of the interface at runtime. Procedure Create a network attachment definition, such as tuning-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ "cniVersion": "0.4.0", 3 "name": "<name>", 4 "plugins": [{ "type": "<main_CNI_plugin>" 5 }, { "type": "tuning", 6 "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" 7 } } ] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace. 2 Specifies the namespace that the object is associated with. 3 Specifies the CNI specification version. 4 Specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 5 Specifies the name of the main CNI plugin to configure. 6 Specifies the name of the CNI meta plugin. 7 Specifies the sysctl to set. The interface name is represented by the IFNAME token and is replaced with the actual name of the interface at runtime. An example YAML file is shown here: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" } } ] }' Apply the YAML by running the following command: USD oc apply -f tuning-example.yaml Example output networkattachmentdefinition.k8.cni.cncf.io/tuningnad created Create a pod such as examplepod.yaml with the network attachment definition similar to the following: apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: ["ALL"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault 1 Specify the name of the configured NetworkAttachmentDefinition . 2 runAsUser controls which user ID the container is run with. 3 runAsGroup controls which primary group ID the containers is run with. 4 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. 
This boolean directly controls whether the no_new_privs flag gets set on the container process. 5 capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod. 6 runAsNonRoot: true requires that the container will run with a user with any UID other than 0. 7 RuntimeDefault enables the default seccomp profile for a pod or container workload. Apply the YAML by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh tunepod Verify the values of the configured sysctl flags. For example, find the value net.ipv4.conf.net1.accept_redirects by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects Expected output net.ipv4.conf.net1.accept_redirects = 1 17.2. Enabling all-multicast mode by using the tuning CNI You can enable all-multicast mode by using the tuning Container Network Interface (CNI) meta plugin. The following procedure describes how to configure the tuning CNI to enable the all-multicast mode. Procedure Create a network attachment definition, such as tuning-allmulti.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ "cniVersion": "0.4.0", 3 "name": "<name>", 4 "plugins": [{ "type": "<main_CNI_plugin>" 5 }, { "type": "tuning", 6 "allmulti": true 7 } } ] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace. 2 Specifies the namespace that the object is associated with. 3 Specifies the CNI specification version. 4 Specifies the name for the configuration. Match the configuration name to the name value of the network attachment definition. 5 Specifies the name of the main CNI plugin to configure. 6 Specifies the name of the CNI meta plugin. 7 Changes the all-multicast mode of the interface. If enabled, all multicast packets on the network will be received by the interface. An example YAML file is shown here: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: setallmulti namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "setallmulti", "plugins": [ { "type": "bridge" }, { "type": "tuning", "allmulti": true } ] }' Apply the settings specified in the YAML file by running the following command: USD oc apply -f tuning-allmulti.yaml Example output networkattachmentdefinition.k8s.cni.cncf.io/setallmulti created Create a pod with a network attachment definition similar to that specified in the following examplepod.yaml sample file: apiVersion: v1 kind: Pod metadata: name: allmultipod namespace: default annotations: k8s.v1.cni.cncf.io/networks: setallmulti 1 spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: ["ALL"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault 1 Specifies the name of the configured NetworkAttachmentDefinition . 2 Specifies the user ID the container is run with. 3 Specifies which primary group ID the container is run with. 4 Specifies if a pod can request privilege escalation. If unspecified, it defaults to true . 
This boolean directly controls whether the no_new_privs flag gets set on the container process. 5 Specifies the container capabilities. The drop: ["ALL"] statement indicates that all Linux capabilities are dropped from the pod, providing a more restrictive security profile. 6 Specifies that the container will run with a user with any UID other than 0. 7 Specifies the container's seccomp profile. In this case, the type is set to RuntimeDefault . Seccomp is a Linux kernel feature that restricts the system calls available to a process, enhancing security by minimizing the attack surface. Apply the settings specified in the YAML file by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE allmultipod 1/1 Running 0 23s Log in to the pod by running the following command: USD oc rsh allmultipod List all the interfaces associated with the pod by running the following command: sh-4.4# ip link Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2 1 eth0@if22 is the primary interface 2 net1@if24 is the secondary interface configured with the network-attachment-definition that supports the all-multicast mode (ALLMULTI flag) 17.3. Additional resources Using sysctls in containers SR-IOV network node configuration object Configuring interface-level network sysctl settings and all-multicast mode for SR-IOV networks
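The tuning meta plugin can also set the other interface attributes mentioned at the start of this chapter, such as the MTU and the MAC address. The following network attachment definition is a sketch only; the name, MTU value, and MAC address are hypothetical, and the mtu and mac keys are taken from the upstream tuning plugin configuration.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: settunables
  namespace: default
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "settunables",
    "plugins": [
      {
        "type": "bridge"
      },
      {
        "type": "tuning",
        "mtu": 1450,
        "mac": "c2:b0:57:49:47:f1"
      }
    ]
  }'
After attaching a pod to this network, you can confirm the values on the secondary interface with the ip link command, as in the all-multicast verification above.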
[ "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" 7 } } ] }", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } } ] }'", "oc apply -f tuning-example.yaml", "networkattachmentdefinition.k8.cni.cncf.io/tuningnad created", "apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s", "oc rsh tunepod", "sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects", "net.ipv4.conf.net1.accept_redirects = 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"allmulti\": true 7 } } ] }", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: setallmulti namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"setallmulti\", \"plugins\": [ { \"type\": \"bridge\" }, { \"type\": \"tuning\", \"allmulti\": true } ] }'", "oc apply -f tuning-allmulti.yaml", "networkattachmentdefinition.k8s.cni.cncf.io/setallmulti created", "apiVersion: v1 kind: Pod metadata: name: allmultipod namespace: default annotations: k8s.v1.cni.cncf.io/networks: setallmulti 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE allmultipod 1/1 Running 0 23s", "oc rsh allmultipod", "sh-4.4# ip link", "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/configure-syscontrols-interface-tuning-cni
Chapter 16. RHCOS image layering
Chapter 16. RHCOS image layering Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to easily extend the functionality of your base RHCOS image by layering additional images onto the base image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster. You create a custom layered image by using a Containerfile and applying it to nodes by using a MachineConfig object. The Machine Config Operator overrides the base RHCOS image, as specified by the osImageURL value in the associated machine config, and boots the new image. You can remove the custom layered image by deleting the machine config, The MCO reboots the nodes back to the base RHCOS image. With RHCOS image layering, you can install RPMs into your base image, and your custom content will be booted alongside RHCOS. The Machine Config Operator (MCO) can roll out these custom layered images and monitor these custom containers in the same way it does for the default RHCOS image. RHCOS image layering gives you greater flexibility in how you manage your RHCOS nodes. Important Installing realtime kernel and extensions RPMs as custom layered content is not recommended. This is because these RPMs can conflict with RPMs installed by using a machine config. If there is a conflict, the MCO enters a degraded state when it tries to install the machine config RPM. You need to remove the conflicting extension from your machine config before proceeding. As soon as you apply the custom layered image to your cluster, you effectively take ownership of your custom layered images and those nodes. While Red Hat remains responsible for maintaining and updating the base RHCOS image on standard nodes, you are responsible for maintaining and updating images on nodes that use a custom layered image. You assume the responsibility for the package you applied with the custom layered image and any issues that might arise with the package. To apply a custom layered image, you create a Containerfile that references an OpenShift Container Platform image and the RPM that you want to apply. You then push the resulting custom layered image to an image registry. In a non-production OpenShift Container Platform cluster, create a MachineConfig object for the targeted node pool that points to the new image. Note Use the same base RHCOS image installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image used in your cluster. RHCOS image layering allows you to use the following types of images to create custom layered images: OpenShift Container Platform Hotfixes . You can work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your RHCOS image. In some instances, you might want a bug fix or enhancement before it is included in an official OpenShift Container Platform release. RHCOS image layering allows you to easily add the Hotfix before it is officially released and remove the Hotfix when the underlying RHCOS image incorporates the fix. Important Some Hotfixes require a Red Hat Support Exception and are outside of the normal scope of OpenShift Container Platform support coverage or life cycle policies. In the event you want a Hotfix, it will be provided to you based on Red Hat Hotfix policy . Apply it on top of the base image and test that new custom layered image in a non-production environment. 
When you are satisfied that the custom layered image is safe to use in production, you can roll it out on your own schedule to specific node pools. For any reason, you can easily roll back the custom layered image and return to using the default RHCOS. Example Containerfile to apply a Hotfix # Using a 4.12.0 image FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... #Install hotfix rpm RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && \ rpm-ostree cleanup -m && \ ostree container commit RHEL packages . You can download Red Hat Enterprise Linux (RHEL) packages from the Red Hat Customer Portal , such as chrony, firewalld, and iputils. Example Containerfile to apply the firewalld utility FROM quay.io/openshift-release-dev/ocp-release@sha256... ADD configure-firewall-playbook.yml . RUN rpm-ostree install firewalld ansible && \ ansible-playbook configure-firewall-playbook.yml && \ rpm -e ansible && \ ostree container commit Example Containerfile to apply the libreswan utility # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ # RHEL entitled host is needed here to access RHEL packages # Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && \ systemctl enable ipsec && \ ostree container commit Because libreswan requires additional RHEL packages, the image must be built on an entitled RHEL host. Third-party packages . You can download and install RPMs from third-party organizations, such as the following types of packages: Bleeding edge drivers and kernel enhancements to improve performance or add capabilities. Forensic client tools to investigate possible and actual break-ins. Security agents. Inventory agents that provide a coherent view of the entire cluster. SSH Key management packages. Example Containerfile to apply a third-party package from EPEL # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ # RHEL entitled host is needed here to access RHEL packages # Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && \ systemctl enable ipsec && \ ostree container commit Example Containerfile to apply a third-party package that has RHEL dependencies # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` # hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256... # Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ # RHEL entitled host is needed here to access RHEL packages # Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && \ systemctl enable ipsec && \ ostree container commit This Containerfile installs the Linux fish program. Because fish requires additional RHEL packages, the image must be built on an entitled RHEL host. After you create the machine config, the Machine Config Operator (MCO) performs the following steps: Renders a new machine config for the specified pool or pools. Performs cordon and drain operations on the nodes in the pool or pools. Writes the rest of the machine config parameters onto the nodes. Applies the custom layered image to the node. Reboots the node using the new image. 
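As a sketch of the build step that precedes this rollout, you could build and push the custom layered image with Podman by using commands similar to the following; the registry, image name, and file paths are hypothetical.
# Build the custom layered image from a Containerfile in the current directory
podman build --authfile /path/to/pull-secret -t quay.io/my-registry/custom-image:latest -f Containerfile .
# Push the image to a registry that the cluster can access
podman push --authfile /path/to/pull-secret quay.io/my-registry/custom-image:latest
# The machine config later references the pushed image by its digest, not by the tag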
Important It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster. 16.1. Applying a RHCOS custom layered image You can easily configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering on the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the new custom layered image, overriding the base Red Hat Enterprise Linux CoreOS (RHCOS) image. To apply a custom layered image to your cluster, you must have the custom layered image in a repository that your cluster can access. Then, create a MachineConfig object that points to the custom layered image. You need a separate MachineConfig object for each machine config pool that you want to configure. Important When you configure a custom layered image, OpenShift Container Platform no longer automatically updates any node that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. If you roll back the custom layer, OpenShift Container Platform will again automatically update the node. See the Additional resources section that follows for important information about updating nodes that use a custom layered image. Prerequisites You must create a custom layered image that is based on an OpenShift Container Platform image digest, not a tag. Note You should use the same base RHCOS image that is installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image being used in your cluster. For example, the following Containerfile creates a custom layered image from an OpenShift Container Platform 4.15 image and overrides the kernel package with one from CentOS 9 Stream: Example Containerfile for a custom layer image # Using a 4.15.0 image FROM quay.io/openshift-release/ocp-release@sha256... 1 #Install hotfix rpm RUN rpm-ostree cliwrap install-to-root / && \ 2 rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \ 3 rpm-ostree cleanup -m && \ ostree container commit 1 Specifies the RHCOS base image of your cluster. 2 Enables cliwrap . This is currently required to intercept some command invocations made from kernel scripts. 3 Replaces the kernel packages. Note Instructions on how to create a Containerfile are beyond the scope of this documentation. Because the process for building a custom layered image is performed outside of the cluster, you must use the --authfile /path/to/pull-secret option with Podman or Buildah. Alternatively, to have the pull secret read by these tools automatically, you can add it to one of the default file locations: ~/.docker/config.json , USDXDG_RUNTIME_DIR/containers/auth.json , ~/.docker/config.json , or ~/.dockercfg . Refer to the containers-auth.json man page for more information. You must push the custom layered image to a repository that your cluster can access. Procedure Create a machine config file. Create a YAML file similar to the following: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: os-layer-custom spec: osImageURL: quay.io/my-registry/custom-image@sha256... 2 1 Specifies the machine config pool to apply the custom layered image. 2 Specifies the path to the custom layered image in the repository. 
Create the MachineConfig object: USD oc create -f <file_name>.yaml Important It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster. Verification You can verify that the custom layered image is applied by performing any of the following checks: Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config is created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-ssh 3.2.0 98m 99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-worker-ssh 3.2.0 98m os-layer-custom 10s 1 rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s 2 1 New machine config 2 New rendered machine config Check that the osImageURL value in the new machine config points to the expected image: USD oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Example output Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Namespace: Labels: <none> Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 5bdb57489b720096ef912f738b46330a8f577803 machineconfiguration.openshift.io/release-image-version: 4.15.0-ec.3 API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig ... Os Image URL: quay.io/my-registry/custom-image@sha256... Check that the associated machine config pool is updated with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-15961f1da260f7be141006404d17d39b True False False 3 3 3 0 39m worker rendered-worker-5de4837625b1cbc237de6b22bc0bc873 True False False 3 0 0 0 39m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. In this case, you will not see the new machine config listed in the output. When the field becomes False , the worker machine config pool has rolled out to the new machine config. Check the nodes to see that scheduling on the nodes is disabled. 
This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5 When the node is back in the Ready state, check that the node is using the custom layered image: Open an oc debug session to the node. For example: USD oc debug node/ip-10-0-155-125.us-west-1.compute.internal Set /host as the root directory within the debug shell: sh-4.4# chroot /host Run the rpm-ostree status command to view that the custom layered image is in use: sh-4.4# sudo rpm-ostree status Example output Additional resources Updating with a RHCOS custom layered image 16.2. Removing a RHCOS custom layered image You can easily revert Red Hat Enterprise Linux CoreOS (RHCOS) image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base Red Hat Enterprise Linux CoreOS (RHCOS) image, overriding the custom layered image. To remove a Red Hat Enterprise Linux CoreOS (RHCOS) custom layered image from your cluster, you need to delete the machine config that applied the image. Procedure Delete the machine config that applied the custom layered image. USD oc delete mc os-layer-custom After deleting the machine config, the nodes reboot. Verification You can verify that the custom layered image is removed by performing any of the following checks: Check that the worker machine config pool is updating with the machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m 1 1 When the UPDATING field is True , the machine config pool is updating with the machine config. When the field becomes False , the worker machine config pool has rolled out to the machine config. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5 When the node is back in the Ready state, check that the node is using the base image: Open an oc debug session to the node. For example: USD oc debug node/ip-10-0-155-125.us-west-1.compute.internal Set /host as the root directory within the debug shell: sh-4.4# chroot /host Run the rpm-ostree status command to view that the custom layered image is in use: sh-4.4# sudo rpm-ostree status Example output 16.3. 
Updating with a RHCOS custom layered image When you configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering, OpenShift Container Platform no longer automatically updates the node pool that uses the custom layered image. You are responsible for manually updating your nodes as appropriate. To update a node that uses a custom layered image, follow these general steps: The cluster automatically upgrades to version x.y.z+1, except for the nodes that use the custom layered image. Create a new Containerfile that references the updated OpenShift Container Platform image and the RPM that you previously applied. Create a new machine config that points to the updated custom layered image. Updating a node with a custom layered image is not required. However, if that node falls too far behind the current OpenShift Container Platform version, you could experience unexpected results.
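A minimal sketch of such an updated Containerfile follows. It is modeled on the hotfix example earlier in this chapter; the base image digest placeholder and the hotfix RPM URL are illustrative values, not literal values to use:
# Hypothetical digest of the updated OpenShift Container Platform base image
FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<updated_digest>
# Reapply the previously installed hotfix RPM (example URL) on top of the updated base image
RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit
After you build and push the image to your repository, update the osImageURL value in the machine config to point to the new image so that the Machine Config Operator rolls it out to the nodes.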
[ "Using a 4.12.0 image FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 #Install hotfix rpm RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && rpm-ostree cleanup -m && ostree container commit", "FROM quay.io/openshift-release-dev/ocp-release@sha256 ADD configure-firewall-playbook.yml . RUN rpm-ostree install firewalld ansible && ansible-playbook configure-firewall-playbook.yml && rpm -e ansible && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit", "Using a 4.15.0 image FROM quay.io/openshift-release/ocp-release@sha256... 1 #Install hotfix rpm RUN rpm-ostree cliwrap install-to-root / && \\ 2 rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \\ 3 rpm-ostree cleanup -m && ostree container commit", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: os-layer-custom spec: osImageURL: quay.io/my-registry/custom-image@sha256... 
2", "oc create -f <file_name>.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-ssh 3.2.0 98m 99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-worker-ssh 3.2.0 98m os-layer-custom 10s 1 rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s 2", "oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873", "Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Namespace: Labels: <none> Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 5bdb57489b720096ef912f738b46330a8f577803 machineconfiguration.openshift.io/release-image-version: 4.15.0-ec.3 API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Os Image URL: quay.io/my-registry/custom-image@sha256", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-15961f1da260f7be141006404d17d39b True False False 3 3 3 0 39m worker rendered-worker-5de4837625b1cbc237de6b22bc0bc873 True False False 3 0 0 0 39m 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5", "oc debug node/ip-10-0-155-125.us-west-1.compute.internal", "sh-4.4# chroot /host", "sh-4.4# sudo rpm-ostree status", "State: idle Deployments: * ostree-unverified-registry:quay.io/my-registry/ Digest: sha256:", "oc delete mc os-layer-custom", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5", "oc debug node/ip-10-0-155-125.us-west-1.compute.internal", "sh-4.4# chroot /host", "sh-4.4# sudo rpm-ostree status", "State: idle Deployments: * 
ostree-unverified-registry:podman pull quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73 Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/postinstallation_configuration/coreos-layering
Chapter 7. Enabling SSL/TLS on overcloud public endpoints
Chapter 7. Enabling SSL/TLS on overcloud public endpoints By default, the overcloud uses unencrypted endpoints for the overcloud services. To enable SSL/TLS in your overcloud, Red Hat recommends that you use a certificate authority (CA) solution. When you use a CA solution, you have production-ready capabilities such as certificate renewal, certificate revocation lists (CRLs), and industry-accepted cryptography. For information on using Red Hat Identity Manager (IdM) as a CA, see Implementing TLS-e with Ansible . You can use the following manual process to enable SSL/TLS for Public API endpoints only; the Internal and Admin APIs remain unencrypted. You must also manually update SSL/TLS certificates if you do not use a CA. For more information, see Manually updating SSL/TLS certificates . Prerequisites Network isolation to define the endpoints for the Public API. The openssl-perl package is installed. You have an SSL/TLS certificate. For more information, see Configuring custom SSL/TLS certificates . 7.1. Enabling SSL/TLS To enable SSL/TLS in your overcloud, you must create an environment file that contains parameters for your SSL/TLS certificates and private key. Procedure Copy the enable-tls.yaml environment file from the heat template collection: Edit this file and make the following changes for these parameters: SSLCertificate Copy the contents of the certificate file ( server.crt.pem ) into the SSLCertificate parameter: Important The certificate contents require the same indentation level for all new lines. SSLIntermediateCertificate If you have an intermediate certificate, copy the contents of the intermediate certificate into the SSLIntermediateCertificate parameter: Important The certificate contents require the same indentation level for all new lines. SSLKey Copy the contents of the private key ( server.key.pem ) into the SSLKey parameter: Important The private key contents require the same indentation level for all new lines. 7.2. Injecting a root certificate If the certificate signer is not in the default trust store on the overcloud image, you must inject the certificate authority into the overcloud image. Procedure Copy the inject-trust-anchor-hiera.yaml environment file from the heat template collection: Edit this file and make the following changes for these parameters: CAMap Lists the content of each certificate authority (CA) to inject into the overcloud. The overcloud requires the CA files used to sign the certificates for both the undercloud and the overcloud. Copy the contents of the root certificate authority file ( ca.crt.pem ) into an entry. For example, your CAMap parameter might look like the following: Important The certificate authority contents require the same indentation level for all new lines. You can also inject additional CAs with the CAMap parameter. 7.3. Configuring DNS endpoints If you use a DNS hostname to access the overcloud through SSL/TLS, copy the /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml file into the /home/stack/templates directory. Note It is not possible to redeploy with a TLS-everywhere architecture if this environment file is not included in the initial deployment. Configure the host and domain names for all fields, adding parameters for custom networks if needed: CloudDomain The DNS domain for hosts. CloudName The DNS hostname of the overcloud endpoints. CloudNameCtlplane The DNS name of the provisioning network endpoint. CloudNameInternal The DNS name of the Internal API endpoint. 
CloudNameStorage The DNS name of the storage endpoint. CloudNameStorageManagement The DNS name of the storage management endpoint. Procedure Use one of the following parameters to add the DNS servers to use: DEFAULT/undercloud_nameservers %SUBNET_SECTION%/dns_nameservers Tip You can use the CloudName{network.name} definition to set the DNS name for an API endpoint on a composable network that uses a virtual IP. For more information, see Adding a composable network in Installing and managing Red Hat OpenStack Platform with director . 7.4. Adding environment files during overcloud creation Use the -e option with the openstack overcloud deploy command to include environment files in the deployment process. Add the environment files from this section in the following order: The environment file to enable SSL/TLS ( enable-tls.yaml ) The environment file to set the DNS hostname ( custom-domain.yaml ) The environment file to inject the root certificate authority ( inject-trust-anchor-hiera.yaml ) The environment file to set the public endpoint mapping: If you use a DNS name to access the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml If you use an IP address to access the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml Procedure Use the following deployment command snippet as an example of how to include your SSL/TLS environment files: 7.5. Manually Updating SSL/TLS Certificates Complete the following steps if you are using your own SSL/TLS certificates that are not auto-generated from the TLS everywhere (TLS-e) process. Procedure Edit your heat templates with the following content: Edit the enable-tls.yaml file and update the SSLCertificate , SSLKey , and SSLIntermediateCertificate parameters. If your certificate authority has changed, edit the inject-trust-anchor-hiera.yaml file and update the CAMap parameter. Rerun the deployment command: Note This procedure uses a combination of --limit and --tags in the openstack overcloud deploy command to reduce impact and completion time.
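Before you rerun the deployment command, you can optionally confirm that the replacement certificate and private key belong together. This is a minimal sketch that assumes an RSA key pair with the file names server.crt.pem and server.key.pem used in the earlier examples; the two digests must be identical:
# Compare the modulus of the certificate and the private key; the hashes must match
openssl x509 -noout -modulus -in server.crt.pem | openssl md5
openssl rsa -noout -modulus -in server.key.pem | openssl md5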
[ "cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml ~/templates/.", "parameter_defaults: SSLCertificate: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGS sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQ -----END CERTIFICATE-----", "parameter_defaults: SSLIntermediateCertificate: | -----BEGIN CERTIFICATE----- sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvpBCwUAMFgxCzAJB MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQE -----END CERTIFICATE-----", "parameter_defaults: SSLKey: | -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4X -----END RSA PRIVATE KEY-----", "cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml ~/templates/.", "parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCS BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBw UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBA -----END CERTIFICATE----- overcloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJAIc75A7FD++DMA0GCS BAMMD3d3dy5leGFtcGxlLmNvbTAeFw0xOTAxMz Um54yGCARyp3LpkxvyfMXX1DokpS1uKi7s6CkF -----END CERTIFICATE-----", "openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml", "openstack overcloud deploy --templates [...] --limit Controller --tags facts,host_prep_steps -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_enabling-ssl-tls-on-overcloud-public-endpoints
Chapter 7. Updating hosted control planes
Chapter 7. Updating hosted control planes Updates for hosted control planes involve updating the hosted cluster and the node pools. For a cluster to remain fully operational during an update process, you must meet the requirements of the Kubernetes version skew policy while completing the control plane and node updates. 7.1. Requirements to upgrade hosted control planes The multicluster engine for Kubernetes Operator can manage one or more OpenShift Container Platform clusters. After you create a hosted cluster on OpenShift Container Platform, you must import your hosted cluster in the multicluster engine Operator as a managed cluster. Then, you can use the OpenShift Container Platform cluster as a management cluster. Consider the following requirements before you start updating hosted control planes: You must use the bare metal platform for an OpenShift Container Platform cluster when using OpenShift Virtualization as a provider. You must use bare metal or OpenShift Virtualization as the cloud platform for the hosted cluster. You can find the platform type of your hosted cluster in the spec.Platform.type specification of the HostedCluster custom resource (CR). Important You must update hosted control planes in the following order: Upgrade an OpenShift Container Platform cluster to the latest version. For more information, see "Updating a cluster using the web console" or "Updating a cluster using the CLI". Upgrade the multicluster engine Operator to the latest version. For more information, see "Updating installed Operators". Upgrade the hosted cluster and node pools from the OpenShift Container Platform version to the latest version. For more information, see "Updating a control plane in a hosted cluster" and "Updating node pools in a hosted cluster". Additional resources Updating a cluster using the web console Updating a cluster using the CLI Updating installed Operators Updating a control plane in a hosted cluster Updating node pools in a hosted cluster 7.2. Setting channels in a hosted cluster You can see available updates in the HostedCluster.Status field of the HostedCluster custom resource (CR). The available updates are not fetched from the Cluster Version Operator (CVO) of a hosted cluster. The list of the available updates can be different from the available updates from the following fields of the HostedCluster custom resource (CR): status.version.availableUpdates status.version.conditionalUpdates The initial HostedCluster CR does not have any information in the status.version.availableUpdates and status.version.conditionalUpdates fields. After you set the spec.channel field to the stable OpenShift Container Platform release version, the HyperShift Operator reconciles the HostedCluster CR and updates the status.version field with the available and conditional updates. See the following example of the HostedCluster CR that contains the channel configuration: spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK 1 Replace <4.y> with the OpenShift Container Platform release version you specified in spec.release . For example, if you set the spec.release to ocp-release:4.16.4-multi , you must set spec.channel to stable-4.16 . 
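For example, one way to set the channel without editing the full CR is a merge patch. This is a sketch that assumes a hypothetical hosted cluster named my-cluster in the clusters namespace; substitute your own names and release version:
# Set spec.channel so that the HyperShift Operator populates status.version with available updates
oc patch hostedcluster my-cluster -n clusters --type=merge -p '{"spec":{"channel":"stable-4.16"}}'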
After you configure the channel in the HostedCluster CR, to view the output of the status.version.availableUpdates and status.version.conditionalUpdates fields, run the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml Example output version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: "2024-09-23T22:33:38Z" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41", name=~"sriov-network-operator[.].*"}) or 0 * group(csv_count{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. 
name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171 7.3. Updating the OpenShift Container Platform version in a hosted cluster Hosted control planes enables the decoupling of updates between the control plane and the data plane. As a cluster service provider or cluster administrator, you can manage the control plane and the data separately. You can update a control plane by modifying the HostedCluster custom resource (CR) and a node by modifying its NodePool CR. Both the HostedCluster and NodePool CRs specify an OpenShift Container Platform release image in a .release field. To keep your hosted cluster fully operational during an update process, the control plane and the node updates must follow the Kubernetes version skew policy . 7.3.1. The multicluster engine Operator hub management cluster The multicluster engine for Kubernetes Operator requires a specific OpenShift Container Platform version for the management cluster to remain in a supported state. You can install the multicluster engine Operator from OperatorHub in the OpenShift Container Platform web console. See the following support matrices for the multicluster engine Operator versions: multicluster engine Operator 2.7 multicluster engine Operator 2.6 multicluster engine Operator 2.5 multicluster engine Operator 2.4 The multicluster engine Operator supports the following OpenShift Container Platform versions: The latest unreleased version The latest released version Two versions before the latest released version You can also get the multicluster engine Operator version as a part of Red Hat Advanced Cluster Management (RHACM). 7.3.2. Supported OpenShift Container Platform versions in a hosted cluster When deploying a hosted cluster, the OpenShift Container Platform version of the management cluster does not affect the OpenShift Container Platform version of a hosted cluster. The HyperShift Operator creates the supported-versions ConfigMap in the hypershift namespace. The supported-versions ConfigMap describes the range of supported OpenShift Container Platform versions that you can deploy. See the following example of the supported-versions ConfigMap: apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{"versions":["4.17","4.16","4.15","4.14"]}' kind: ConfigMap metadata: creationTimestamp: "2024-06-20T07:12:31Z" labels: hypershift.openshift.io/supported-versions: "true" name: supported-versions namespace: hypershift resourceVersion: "927029" uid: f6336f91-33d3-472d-b747-94abae725f70 Important To create a hosted cluster, you must use the OpenShift Container Platform version from the support version range. However, the multicluster engine Operator can manage only between n+1 and n-2 OpenShift Container Platform versions, where n defines the current minor version. You can check the multicluster engine Operator support matrix to ensure the hosted clusters managed by the multicluster engine Operator are within the supported OpenShift Container Platform range. To deploy a higher version of a hosted cluster on OpenShift Container Platform, you must update the multicluster engine Operator to a new minor version release to deploy a new version of the Hypershift Operator. Upgrading the multicluster engine Operator to a new patch, or z-stream, release does not update the HyperShift Operator to the version. 
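As a quick check, you can read the supported-versions ConfigMap directly from the management cluster. This sketch assumes the default hypershift namespace shown in the example above:
# Print the range of OpenShift Container Platform versions that the installed HyperShift Operator supports
oc get configmap supported-versions -n hypershift -o jsonpath='{.data.supported-versions}'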
See the following example output of the hcp version command that shows the supported OpenShift Container Platform versions for OpenShift Container Platform 4.16 in the management cluster: Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14 7.4. Updates for the hosted cluster The spec.release.image value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release.image value to the HostedControlPlane.spec.releaseImage value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates. To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 7.5. Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 7.5.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 7.5.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 7.6. Updating node pools in a hosted cluster You can update your version of OpenShift Container Platform by updating the node pools in your hosted cluster. 
The node pool version must not surpass the hosted control plane version. The .spec.release field in the NodePool custom resource (CR) shows the version of a node pool. Procedure Change the spec.release.image value in the node pool by entering the following command: USD oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> \ 1 --type=merge \ -p '{"spec":{"nodeDrainTimeout":"60s","release":{"image":"<openshift_release_image>"}}}' 2 1 Replace <node_pool_name> and <hosted_cluster_namespace> with your node pool name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. Verification To verify that the new version was rolled out, check the .status.conditions value in the node pool by running the following command: USD oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:00:40Z" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: "True" type: ValidReleaseImage 1 Replace <4.y.z> with the supported OpenShift Container Platform version. 7.7. Updating a control plane in a hosted cluster On hosted control planes, you can upgrade your version of OpenShift Container Platform by updating the hosted cluster. The .spec.release in the HostedCluster custom resource (CR) shows the version of the control plane. The HostedCluster updates the .spec.release field to the HostedControlPlane.spec.release and runs the appropriate Control Plane Operator version. The HostedControlPlane resource orchestrates the rollout of the new version of the control plane components along with the OpenShift Container Platform component in the data plane through the new version of the Cluster Version Operator (CVO). The HostedControlPlane includes the following artifacts: CVO Cluster Network Operator (CNO) Cluster Ingress Operator Manifests for the Kube API server, scheduler, and manager Machine approver Autoscaler Infrastructure resources to enable ingress for control plane endpoints such as the Kube API server, ignition, and konnectivity You can set the .spec.release field in the HostedCluster CR to update the control plane by using the information from the status.version.availableUpdates and status.version.conditionalUpdates fields. Procedure Add the hypershift.openshift.io/force-upgrade-to=<openshift_release_image> annotation to the hosted cluster by entering the following command: USD oc annotate hostedcluster \ -n <hosted_cluster_namespace> <hosted_cluster_name> \ 1 "hypershift.openshift.io/force-upgrade-to=<openshift_release_image>" \ 2 --overwrite 1 Replace <hosted_cluster_name> and <hosted_cluster_namespace> with your hosted cluster name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. 
Change the spec.release.image value in the hosted cluster by entering the following command: USD oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \ --type=merge \ -p '{"spec":{"release":{"image":"<openshift_release_image>"}}}' Verification To verify that the new version was rolled out, check the .status.conditions and .status.version values in the hosted cluster by running the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> \ -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:01:01Z" message: Payload loaded version="4.y.z" image="quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64" 1 status: "True" type: ClusterVersionReleaseAccepted #... version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z 1 2 Replace <4.y.z> with the supported OpenShift Container Platform version. 7.8. Updating a hosted cluster by using the multicluster engine Operator console You can update your hosted cluster by using the multicluster engine Operator console. Important Before updating a hosted cluster, you must refer to the available and conditional updates of a hosted cluster. Choosing a wrong release version might break the hosted cluster. Procedure Select All clusters . Navigate to Infrastructure Clusters to view managed hosted clusters. Click the Upgrade available link to update the control plane and node pools.
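Whichever update method you use, you can also follow the rollout from the CLI. This is a sketch that reuses the placeholder names from the previous commands:
# Watch the hosted cluster until the new release is rolled out
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -w
# Inspect the ClusterVersionReleaseAccepted condition shown in the earlier verification output
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.status.conditions[?(@.type=="ClusterVersionReleaseAccepted")].status}'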
[ "spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK", "oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml", "version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: \"2024-09-23T22:33:38Z\" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\", name=~\"sriov-network-operator[.].*\"}) or 0 * group(csv_count{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. 
Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171", "apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{\"versions\":[\"4.17\",\"4.16\",\"4.15\",\"4.14\"]}' kind: ConfigMap metadata: creationTimestamp: \"2024-06-20T07:12:31Z\" labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift resourceVersion: \"927029\" uid: f6336f91-33d3-472d-b747-94abae725f70", "Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14", "oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> \\ 1 --type=merge -p '{\"spec\":{\"nodeDrainTimeout\":\"60s\",\"release\":{\"image\":\"<openshift_release_image>\"}}}' 2", "oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml", "status: conditions: - lastTransitionTime: \"2024-05-20T15:00:40Z\" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: \"True\" type: ValidReleaseImage", "oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \\ 1 \"hypershift.openshift.io/force-upgrade-to=<openshift_release_image>\" \\ 2 --overwrite", "oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"release\":{\"image\":\"<openshift_release_image>\"}}}'", "oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml", "status: conditions: - lastTransitionTime: \"2024-05-20T15:01:01Z\" message: Payload loaded version=\"4.y.z\" image=\"quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64\" 1 status: \"True\" type: ClusterVersionReleaseAccepted # version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/updating-hosted-control-planes
Chapter 7. Installing a cluster on IBM Cloud into an existing VPC
Chapter 7. Installing a cluster on IBM Cloud into an existing VPC In OpenShift Container Platform version 4.15, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 7.2. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.2.3. 
Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. 
Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 7.7.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 Additional resources Optimizing storage 7.7.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 8 11 17 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 13 Specify the name of the resource group that contains the existing virtual private cloud (VPC). 
The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 14 Specify the name of an existing VPC. 15 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 16 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 
2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.13. Next steps Customize your cluster . Optional: Opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 
--log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_cloud/installing-ibm-cloud-vpc
10.5.23. Options
10.5.23. Options The Options directive controls which server features are available in a particular directory. For example, under the restrictive parameters specified for the root directory, Options is only set to the FollowSymLinks directive. No features are enabled, except that the server is allowed to follow symbolic links in the root directory. By default, in the DocumentRoot directory, Options is set to include Indexes and FollowSymLinks . Indexes permits the server to generate a directory listing for a directory if no DirectoryIndex (for example, index.html ) is specified. FollowSymLinks allows the server to follow symbolic links in that directory. Note Options statements from the main server configuration section need to be replicated to each VirtualHost container individually. Refer to Section 10.5.65, " VirtualHost " for more information.
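As an illustration only (the directory paths and server name below are hypothetical, not taken from the default configuration), the following httpd.conf fragment shows Options set for a DocumentRoot directory and then restated inside a VirtualHost container, as the note above requires:
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
</Directory>
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/var/www/example"
    <Directory "/var/www/example">
        # Options from the main server configuration section are not carried over,
        # so the features this virtual host allows are restated here.
        Options FollowSymLinks
    </Directory>
</VirtualHost>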
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-options
Chapter 140. KafkaMirrorMaker2ClusterSpec schema reference
Chapter 140. KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Full list of KafkaMirrorMaker2ClusterSpec schema properties Configures Kafka clusters for mirroring. 140.1. config Use the config properties to configure Kafka options. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 140.2. KafkaMirrorMaker2ClusterSpec schema properties Property Property type Description alias string Alias used to reference the Kafka cluster. bootstrapServers string A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. tls ClientTls TLS configuration for connecting MirrorMaker 2 connectors to a cluster. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker 2 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
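For illustration only, the following partial KafkaMirrorMaker2 resource shows one cluster entry that uses config to restrict the cipher suite and TLS version and to enable hostname verification. The alias, bootstrap address, and secret names are placeholders rather than values defined by this schema reference, and the property values shown are examples, not recommendations:
spec:
  clusters:
  - alias: "my-target-cluster"
    bootstrapServers: my-target-cluster-kafka-bootstrap:9093
    tls:
      trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
    config:
      # Allowed ssl properties for the client connection
      ssl.cipher.suites: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
      ssl.enabled.protocols: TLSv1.2
      ssl.protocol: TLSv1.2
      # Hostname verification; set to an empty string to disable it
      ssl.endpoint.identification.algorithm: HTTPS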
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkamirrormaker2clusterspec-reference
Chapter 8. Desktop
Chapter 8. Desktop New packages: pidgin and pidgin-sipe This update adds: The pidgin instant messaging client, which supports off-the-record (OTR) messaging and the Microsoft Lync instant messaging application. The pidgin-sipe plug-in, which contains a back-end code that implements support for Lync. The users need both the application and the plug-in to use Microsoft Lync. (BZ# 1066457 , BZ#1297461) Scroll wheel increment configurable in GNOME terminal With this update, the gnome-terminal packages have been upgraded so that the scroll wheel setting is now configurable in the GNOME terminal. The scrolling preferences include a checkbutton and a spinbutton, which allow you to choose between a dynamic or fixed scrolling increment. The default option is dynamic scrolling increment, which is based on the number of visible rows. (BZ#1103380) Vinagre user experience improvements The Vinagre remote desktop viewer introduces the following user experience enhancements: A minimize button is available in the fullscreen toolbar, which makes access to custom options easier. It is now possible to scale Remote Desktop Protocol (RDP) sessions. You can set the session size in the Connect dialog. You can now use the secrets service to safely store and retrieve remote credentials. (BZ#1291275) Custom titles for the terminal tabs or windows This update allows users to set custom titles for terminal windows or tabs in gnome-terminal . The titles can be changed directly in the gnome-terminal user interface. (BZ# 1296110 ) Separate menu items for opening tabs and windows restored This update restores separate menu items for opening tabs and windows in gnome-terminal . It is now easier to open a mix of tabs and windows without being familiar with keyboard shortcuts. (BZ#1300826) Native Gnome/GTK+ look for Qt applications Previously, the default Qt style did not provide consistency for Qt applications, causing them not to fit into the Gnome desktop. A new adwaita-qt style has been provided for those applications and the visual differences between the Qt and GTK+ applications are now minimal. (BZ#1306307) rhythmbox rebased to version 3.3.1 Rhythmbox is the GNOME default music player. It is easy to use and includes features such as playlists, podcast playback, and audio streaming. The rhythmbox packages have been upgraded to upstream version 3.3.1. The most notable changes include: Better support for Android devices New task progress display below the track list Support for the composer, disc, and track total tags New style for playback controls and the source list A number of bug fixes for various warnings and unexpected termination errors (BZ# 1298233 ) libreoffice rebased to version 5.0.6.2 The libreoffice packages have been upgraded to upstream version 5.0.6.2, which provides a number of bug fixes and enhancements over the previous version, notably: The status bar and various sidebar decks have been improved. Various toolbars and context menus have been cleaned up or rearranged for better usability. The color selector has been reworked. New templates have been created. Templates now appear directly in the Start Center and can be picked from there. libreoffice now displays an information bar to indicate visibly when a document is being opened in read-only mode. The possibility to embed libreoffice in certain web browsers by using the deprecated NPAPI has been removed. It is possible to connect to SharePoint 2010 and 2013 and OneDrive directly from libreoffice . 
Support has been added for converting formulas into direct values, Master Document templates, reading Adobe Swatch Exchange color palettes in the .ase format, importing Adobe PageMaker documents, and exporting digitally signed PDF files. It is now possible to specify references to entire columns or rows using the A:A or 1:1 notation. Interoperability with Microsoft Office document formats has been improved. For a complete list of bug fixes and enhancements provided by this upgrade, see https://wiki.documentfoundation.org/ReleaseNotes/4.4 and https://wiki.documentfoundation.org/ReleaseNotes/5.0 . (BZ# 1290148 ) GNOME boxes support for Windows Server 2012 R2, Windows 10, and Windows 8.1 GNOME boxes now supports creating virtual machines with Windows Server 2012 R2, Windows 10, and Windows 8.1. (BZ# 1257865 , BZ# 1257867 , BZ# 1267869 ) The vmware graphics driver now supports 3D acceleration in VMware Workstation 12 Previously, the vmware graphics driver in Red Hat Enterprise Linux did not support 3D acceleration in VMware Workstation 12 virtual machines (VM). As a consequence, the GNOME desktop was rendered on the host's CPU instead of the GPU. The driver has been updated to support the VMware Workstation 12 virtual graphics adapter. As a result, the GNOME desktop is now rendered using 3D acceleration. (BZ# 1263120 ) libdvdnav rebased to version 5.0.3 The libdvdnav library allows you to navigate DVD menus on any operating system. The libdvdnav packages have been upgraded to version 5.0.3. The most notable changes include: Fixed a bug on menu-less DVDs Fixed playback issues on multi-angle DVDs Fixed unexpected termination when playing a DVD from a different region than currently set in the DVD drive Fixed memory bugs when reading certain DVDs (BZ# 1068814 ) GIMP rebased to version 2.8.16 The GNU Image Manipulation Program (GIMP) has been upgraded to version 2.8.16, which provides a number of bug fixes and enhancements over the previous version. Notable changes include the following: Core: More robust loading of XCF files Improved performance and behavior when writing XCF files GUI: The widget direction automatically matches the direction of language set for GUI Larger scroll area for tags Fixed switching of dock tabs by drag and drop (DND) hovering DND works between images in one dockable No unexpected termination problem in the save dialog Plug-ins: Improved security of the script-fu server Fixed reading and writing of files in the BMP format Fixed exporting of fonts in the PDF plug-in Support of layer groups in OpenRaster files Fixed loading of PSD files with layer groups (BZ# 1298226 ) gimp-help rebased to version 2.8.2 The gimp-help package has been upgraded to upstream version 2.8.2, which provides a number of bug fixes and enhancements over the previous version. Notably, it also implements a complete translation to Brazilian Portuguese. (BZ# 1370595 ) Qt5 added to Red Hat Enterprise Linux 7 A new version of the Qt library (Qt5) has been added to Red Hat Enterprise Linux 7. This version of Qt brings a number of features for developers as well as support for mobile devices, which was missing in the previous version. (BZ#1272603) Improved UI message when setting a new language in system-config-language Previously, if you selected a new language to install in the Language graphical tool (the system-config-language package), and the selected language group was not available, the error message that was displayed was not clear enough. 
For example, if you selected Italian (Switzerland) , the message displayed was: With this update, the message is updated and will look similar to the following example: The new message means that the new language has been enabled without having to install any new packages. After the reboot, the system will boot in the selected language. (BZ# 1328068 ) New packages: pavucontrol This update adds the pavucontrol packages, which contain PulseAudio Volume Control, a GTK-based volume control application for the PulseAudio sound server. This application enables you to send the output of different audio streams to different output devices, such as headsets or speakers. Individual routing is impossible with the default audio control panel, which sends all audio streams to the same output device. (BZ#1210846) libdvdread rebased to version 5.0.3 The libdvdread packages have been rebased to version 5.0.3. The most notable changes include: Fixes for numerous crashes, assertions and corruptions Fixed compilation in C++ applications Removed the unused feature to remap .MAP files Removed the dvdnavmini library Added the DVDOpenStream API Because of the API change, the .so version also changed. Third-party software dependent on libdvdread needs to be recompiled against this new version. (BZ# 1326238 ) New weather service for gnome-weather Previously, the gnome-weather application used the METAR services provided by the National Oceanic and Atmospheric Administration (NOAA). However, NOAA stopped providing the METAR service. This update introduces a new METAR service provided by the Aviation Weather Center (AWC) and gnome-weather now works as expected. (BZ# 1371550 ) libosinfo rebased to version 0.3.0 The libosinfo packages have been updated to version 0.3.0. Notable changes over the previous version include improving operating system data for several recent versions of Red Hat Enterprise Linux and Ubuntu, and fixing several memory leaks. (BZ#1282919)
[ "Due to comps cleanup italian-support group got removed and no longer exists. Therefore only setting the default system language", "Due to comps cleanup, italian-support group no longer exists and its language packages will not be installed. Therefore only setting Italian as the default system language." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_desktop
Appendix F. Purging storage clusters deployed by Ansible
Appendix F. Purging storage clusters deployed by Ansible If you no longer want to use a Ceph storage cluster, then use the purge-docker-cluster.yml playbook to remove the cluster. Purging a storage cluster is also useful when the installation process failed and you want to start over. Warning After purging a Ceph storage cluster, all data on the OSDs is permanently lost. Prerequisites Root-level access to the Ansible administration node. Access to the ansible user account. For bare-metal deployments: If the osd_auto_discovery option in the /usr/share/ceph-ansible/group-vars/osds.yml file is set to true , then Ansible will fail to purge the storage cluster. Therefore, comment out osd_auto_discovery and declare the OSD devices in the osds.yml file. Ensure that the /var/log/ansible/ansible.log file is writable by the ansible user account. Procedure Navigate to the /usr/share/ceph-ansible/ directory: As the ansible user, run the purge playbook. For bare-metal deployments, use the purge-cluster.yml playbook to purge the Ceph storage cluster: For container deployments: Use the purge-docker-cluster.yml playbook to purge the Ceph storage cluster: Note This playbook removes all packages, containers, configuration files, and all the data created by the Ceph Ansible playbook. To specify a different inventory file other than the default ( /etc/ansible/hosts ), use -i parameter: Syntax Replace INVENTORY_FILE with the path to the inventory file. Example To skip the removal of the Ceph container image, use the --skip-tags="remove_img" option: To skip the removal of the packages that were installed during the installation, use the --skip-tags="with_pkg" option: Additional Resources See the OSD Ansible settings for more details.
[ "cd /usr/share/ceph-ansible", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/purge-cluster.yml", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/purge-docker-cluster.yml", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/purge-docker-cluster.yml -i INVENTORY_FILE", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/purge-docker-cluster.yml -i ~/ansible/hosts", "[ansible@admin ceph-ansible]USD ansible-playbook --skip-tags=\"remove_img\" infrastructure-playbooks/purge-docker-cluster.yml", "[ansible@admin ceph-ansible]USD ansible-playbook --skip-tags=\"with_pkg\" infrastructure-playbooks/purge-docker-cluster.yml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/purging-storage-clusters-deployed-by-ansible-install
Chapter 1. Enterprise Contract for Red Hat Trusted Application Pipeline
Chapter 1. Enterprise Contract for Red Hat Trusted Application Pipeline The more complex a software supply chain becomes, the more critical it is to employ reliable checks and best practices to guarantee software artifact integrity and source code dependability. Such artifacts include your container images. This is where Red Hat Enterprise Contract enters your Red Hat Trusted Application Pipeline build and deploy experience. Enterprise Contract is a policy-driven workflow tool for maintaining software supply chain security by defining and enforcing policies for building and testing container images. For a build system that creates Supply-chain Levels for Software Artifacts (SLSA) provenance attestations, such as Tekton with Tekton Chains and GitHub Actions with the SLSA GitHub Generator, checking the signatures and confirming that the contents of the attestations actually match what is expected is a critical part of verifying and maintaining the integrity of your software supply chain. A secure CI/CD workflow should include artifact verification to detect problems early. It's the job of Enterprise Contract to validate that a container image is signed and attested by a known and trusted build system. The general steps for validating a signed and attested container image are as follows: Create or copy a container image with Red Hat Trusted Application Pipeline. Generate a signing key with Cosign. Sign the container image with Cosign. Attest the image with Cosign. Verify your signed and attested container image with the Enterprise Contract CLI. But what does it mean to sign and attest to the provenance of a software artifact like a container image? Why do it? And how? Signed software artifacts like container images are at a significantly lower risk of several attack vectors than unsigned artifacts. When a container image is signed, various cryptographic techniques bind the image to a specific entity or organization. The result is a digital signature that verifies the authenticity of the image so that you can trace it back to its creator, that entity or organization, and also verify that the image wasn't altered or tampered with after it was signed. For more information about software supply chain threats, see Supply chain threats . Enterprise Contract uses the industry standard Sigstore Cosign as a resource library to validate your container images. With Red Hat Trusted Artifact Signer, Red Hat's supported version of the Sigstore framework, you can use your own on-prem instance of Sigstore's services to sign and attest your container images with the Cosign CLI. For more information about RHTAS, see Red Hat Trusted Artifact Signer . As for software artifact attestation , it can't happen without provenance. Provenance is the verifiable information about software artifacts like container images that describes where, when, and how that artifact was produced. The attestation itself is an authenticated statement, in the form of metadata, that proves that an artifact is intact and trustworthy. Enterprise Contract uses that attestation to cryptographically verify that the build was not tampered with, and to check the build against any set of policies, such as SLSA requirements. For more information about SLSA, see About SLSA . 
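The following is a rough, illustrative sketch of those general steps on the command line. The image reference, key file names, predicate file, and policy source are placeholders rather than values defined by RHTAP, and flags can differ between CLI versions, so treat it as an outline and check the Cosign and Enterprise Contract CLI help for your release.
# Generate a Cosign key pair (cosign.key and cosign.pub)
cosign generate-key-pair
# Sign the container image with the private key
cosign sign --key cosign.key quay.io/example/my-app:latest
# Attach a SLSA provenance attestation to the image
cosign attest --key cosign.key --type slsaprovenance --predicate predicate.json quay.io/example/my-app:latest
# Verify the signature and attestation against a policy with the Enterprise Contract CLI
ec validate image --image quay.io/example/my-app:latest --public-key cosign.pub --policy policy.yaml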
When you push your code from either the RHTAP development namespace to the stage namespace, or from the stage namespace to the production namespace, Enterprise Contract automatically runs its validation checks to make sure your container image was signed and attested by known and trusted build systems. When your image passes the Enterprise Contract check, you can merge your code changes to complete your promotion from one environment to the next. For more information about deploying your application to a different namespace, see Trusted Application Pipeline Software Template . For more information about where RHTAP saves your deployment manifests, see the RHTAP GitOps repository and its YAML files. Additional resources For more information about signing and attesting a container image, see Signing a container image .
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/managing_compliance_with_enterprise_contract/con_enterprise-contract-for-rhtap_enterprise_contract-rhtap
Chapter 3. Removed functionality
Chapter 3. Removed functionality This section lists functionality that has been removed in Red Hat Satellite 6.16. 3.1. Security and authentication OVAL-only contents and policies Management of the Open Vulnerability and Assessment Language (OVAL) contents and policies, which were provided as Technology Previews, is no longer available. If you used an OVAL policy on your clients, you must reconfigure them. Jira:SAT-23806 3.2. Content management Entitlement-based subscription management Entitlement-based subscription management has been removed. You must use Simple Content Access, which simplifies the entitlement experience for administrators. For more information, see the Subscription Management Administration Guide for Red Hat Enterprise Linux on the Red Hat Customer Portal. Jira:SAT-27936 [1] 3.3. Host provisioning and management Telemetry disablement in Convert2RHEL job templates You cannot disable telemetry when you use the Convert2RHEL utility. Jira:SAT-24654 [1] foreman_hooks plugin The foreman_hooks plugin has been removed. You must use the foreman_webhooks plugin instead. Jira:SAT-16036 [1] 3.4. Backup and restore Snapshot backup satellite-maintain no longer supports snapshot backups. You must use online backups instead. Jira:SAT-20955 [1] 3.5. Hammer CLI tool Removed Hammer commands and options The following Hammer command has been removed: hammer simple-content-access The following Hammer options have been removed: --simple-content-access removed from the hammer organization create command --simple-content-access removed from the hammer organization update command --source-url removed from the hammer repository synchronize command Jira:SAT-28141 [1] 3.6. REST API Removed API endpoints and routes The following API endpoints have been removed: /katello/api/organizations/:organization_id/simple_content_access/eligible /katello/api/organizations/:organization_id/simple_content_access/enable /katello/api/organizations/:organization_id/simple_content_access/disable /katello/api/organizations/:organization_id/simple_content_access/status /api/compliance/oval_contents /api/compliance/oval_contents/:id /api/compliance/oval_contents/sync /api/compliance/oval_policies /api/compliance/oval_policies/:id /api/compliance/oval_policies/:id/assign_hostgroups /api/compliance/oval_policies/:id/assign_hosts /api/compliance/oval_policies/:id/oval_content /api/compliance/oval_reports/:cname/:oval_policy_id/:date Jira:SAT-28135 [1]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/release_notes/removed-functionality
Chapter 3. Securing the Undertow HTTP Server
Chapter 3. Securing the Undertow HTTP Server Abstract You can configure the built-in Undertow HTTP server to use SSL/TLS security by editing the contents of the etc/undertow.xml configuration file. In particular, you can add SSL/TLS security to the Fuse Console in this way. 3.1. Undertow server The Fuse container is pre-configured with an Undertow server, which acts as a general-purpose HTTP server and HTTP servlet container. Through a single HTTP port (by default, http://localhost:8181 ), the Undertow container can host multiple services, for example: Fuse Console (by default, http://localhost:8181/hawtio ) Apache CXF Web services endpoints (if the host and port are left unspecified in the endpoint configuration) Some Apache Camel endpoints If you use the default Undertow server for all of your HTTP endpoints, you can conveniently add SSL/TLS security to these HTTP endpoints by following the steps described here. 3.2. Create X.509 certificate and private key Before you can enable SSL/TLS on the Undertow server, you must create an X.509 certificate and private key, where the certificate and private key must be in Java keystore format (JKS format). For details of how to create a signed certificate and private key, see Appendix A, Managing Certificates . 3.3. Enabling SSL/TLS for Undertow in an Apache Karaf container For the following procedure, it is assumed that you have already created a signed X.509 certificate and private key pair in the keystore file, alice.ks , with keystore password, StorePass , and key password, KeyPass . To enable SSL/TLS for Undertow in a Karaf container: Make sure that the Pax Web server is configured to take its configuration from the etc/undertow.xml file. When you look at the contents of the etc/org.ops4j.pax.web.cfg file, you should see the following setting: Open the file, etc/org.ops4j.pax.web.cfg , in a text editor and add the following lines: Save and close the file, etc/org.ops4j.pax.web.cfg . Open the file, etc/undertow.xml , in a text editor. The steps assume you are working with the default undertow.xml file, unchanged since installation time. Search for the XML elements, http-listener and https-listener . Comment out the http-listener element (by enclosing it between <!-- and --> ) and uncomment the https-listener element (spread over two lines). 
The edited fragment of XML should now look something like this: <!-- HTTP(S) Listener references Socket Binding (and indirectly - Interfaces) --> <!-- http-listener name="http" socket-binding="http" /> --> verify-client: org.xnio.SslClientAuthMode.NOT_REQUESTED, org.xnio.SslClientAuthMode.REQUESTED, org.xnio.SslClientAuthMode.REQUIRED <!--<https-listener name="https" socket-binding="https" security-realm="https" verify-client="NOT_REQUESTED" enabled="true" /> --> <https-listener name="https" socket-binding="https" worker="default" buffer-pool="default" enabled="true" receive-buffer="65536" send-buffer="65536" tcp-backlog="128" tcp-keep-alive="false" read-timeout="-1" write-timeout="-1" max-connections="1000000" resolve-peer-address="false" disallowed-methods="TRACE OPTIONS" secure="true" max-post-size="10485760" buffer-pipelined-data="false" max-header-size="1048576" max-parameters="1000" max-headers="200" max-cookies="200" allow-encoded-slash="false" decode-url="true" url-charset="UTF-8" always-set-keep-alive="true" max-buffered-request-size="16384" record-request-start-time="true" allow-equals-in-cookie-value="false" no-request-timeout="60000" request-parse-timeout="60000" rfc6265-cookie-validation="false" allow-unescaped-characters-in-url="false" certificate-forwarding="false" proxy-address-forwarding="false" enable-http2="false" http2-enable-push="false" http2-header-table-size="4096" http2-initial-window-size="65535" http2-max-concurrent-streams="-1" http2-max-frame-size="16384" http2-max-header-list-size="-1" require-host-http11="false" proxy-protocol="false" security-realm="https" verify-client="NOT_REQUESTED" enabled-cipher-suites="TLS_AES_256_GCM_SHA384" enabled-protocols="TLSv1.3" ssl-session-cache-size="0" ssl-session-timeout="0" /> Search for the w:keystore element. By default, the w:keystore element is configured as follows: <w:keystore path="USD{karaf.etc}/certs/server.keystore" provider="JKS" alias="server" keystore-password="secret" key-password="secret" generate-self-signed-certificate-host="localhost" /> To install the alice certificate as the Undertow server's certificate, modify the w:keystore element attributes as follows: Set path to the absolute location of the alice.ks file on the file system. Set provider to JKS . Set alias to the alice certificate alias in the keystore. Set keystore-password to the value of the password that unlocks the key store. Set key-password to the value of the password that encrypts the alice private key. Delete the generate-self-signed-certificate-host attribute setting. For example, after installing the alice.ks keystore, the modified w:keystore element would look something like this: <w:keystore path="USD{karaf.etc}/certs/alice.ks" provider="JKS" alias="alice" keystore-password="StorePass" key-password="KeyPass" /> Search for the <interface name="secure"> tag, which is used to specify the IP addresses the secure HTTPS port binds to. By default, this element is commented out, as follows: <!--<interface name="secure">--> <!--<w:inet-address value="127.0.0.1" />--> <!--</interface>--> Uncomment the element and customize the value attribute to specify the IP address which the HTTPS port binds to. For example, the wildcard value, 0.0.0.0 , configures HTTPS to bind to all available IP addresses: <interface name="secure"> <w:inet-address value="0.0.0.0" /> </interface> Search for and uncomment the <socket-binding name="https" tag. 
When this tag is uncommented, it should look something like this: <socket-binding name="https" interface="secure" port="USD{org.osgi.service.http.port.secure}" /> Save and close the file, etc/undertow.xml . Restart the Fuse container, in order for the configuration changes to take effect. 3.4. Customizing allowed TLS protocols and cipher suites You can customize the allowed TLS protocols and cipher suites by modifying the following attributes of the w:engine element in the etc/undertow.xml file: enabled-cipher-suites Specifies the list of allowed TLS/SSL cipher suites. enabled-protocols Specifies the list of allowed TLS/SSL protocols. Warning Do not enable SSL protocol versions, as they are vulnerable to attack. Use only TLS protocol versions. For full details of the available protocols and cipher suites, consult the appropriate JVM documentation and security provider documentation. For example, for Java 8, see Java Cryptography Architecture Oracle Providers Documentation for JDK 8 . 3.5. Connect to the secure console After configuring SSL security for the Undertow server in the Pax Web configuration file, you should be able to open the Fuse Console by browsing to the following URL: Note Remember to type the https: scheme, instead of http: , in this URL. Initially, the browser will warn you that you are using an untrusted certificate. Skip this warning and you will be presented with the login screen for the Fuse Console. 3.6. Advanced Undertow configuration 3.6.1. IO configuration Since PAXWEB-1255 the configuration of the XNIO worker and buffer pool used by the listeners can be altered. In undertow.xml template there is a section that specifies default values of some IO-related parameters: The following buffer-pool parameters may be specified: buffer-size Specifies size of the buffer used for IO operations. When not specified, size is calculated depending on available memory. direct-buffers Determines whether java.nio.ByteBuffer#allocateDirect or java.nio.ByteBuffer#allocate should be used. The following worker parameters may be specified: io-threads The number of I/O threads to create for the worker. If not specified, the number of threads is set to the number of CPUs x 2. task-core-threads The number of threads for the core task thread pool. task-max-threads The maximum number of threads for the worker task thread pool. If not specified, the maximum number of threads is set to the number of CPUs x 16. 3.6.2. Worker IO configuration The Undertow thread pools and their names can be configured on a per-service or bundle basis which helps to make monitoring from Hawtio console and debugging more efficient. In the bundle blueprint configuration file (which is typically stored under the src/main/resources/OSGI-INF/blueprint directory in a Maven project), you can configure the workerIOName and ThreadPool as demonstrated in the following example. Example 3.1. httpu:engine-factory element with workerIOName and ThreadPool configuration The following threadingParameters may be specified: minThreads Specifies the number of "core" threads for the worker task thread pool. Generally this should be reasonably high, at least 10 per CPU core.. maxThreads Specifies the maximum number of threads for the worker task thread pool. The following worker parameters may be specified: workerIOThreads Specifies the number of I/O threads to create for the worker. If not specified, a default will be chosen. One IO thread per CPU core is a reasonable default. workerIOName Specifies the name for the worker. 
If not specified, the default "XNIO-1" will be chosen.
[ "org.ops4j.pax.web.config.file=USD{karaf.etc}/undertow.xml", "org.osgi.service.http.port.secure=8443 org.osgi.service.http.secure.enabled=true", "<!-- HTTP(S) Listener references Socket Binding (and indirectly - Interfaces) --> <!-- http-listener name=\"http\" socket-binding=\"http\" /> --> verify-client: org.xnio.SslClientAuthMode.NOT_REQUESTED, org.xnio.SslClientAuthMode.REQUESTED, org.xnio.SslClientAuthMode.REQUIRED <!--<https-listener name=\"https\" socket-binding=\"https\" security-realm=\"https\" verify-client=\"NOT_REQUESTED\" enabled=\"true\" /> --> <https-listener name=\"https\" socket-binding=\"https\" worker=\"default\" buffer-pool=\"default\" enabled=\"true\" receive-buffer=\"65536\" send-buffer=\"65536\" tcp-backlog=\"128\" tcp-keep-alive=\"false\" read-timeout=\"-1\" write-timeout=\"-1\" max-connections=\"1000000\" resolve-peer-address=\"false\" disallowed-methods=\"TRACE OPTIONS\" secure=\"true\" max-post-size=\"10485760\" buffer-pipelined-data=\"false\" max-header-size=\"1048576\" max-parameters=\"1000\" max-headers=\"200\" max-cookies=\"200\" allow-encoded-slash=\"false\" decode-url=\"true\" url-charset=\"UTF-8\" always-set-keep-alive=\"true\" max-buffered-request-size=\"16384\" record-request-start-time=\"true\" allow-equals-in-cookie-value=\"false\" no-request-timeout=\"60000\" request-parse-timeout=\"60000\" rfc6265-cookie-validation=\"false\" allow-unescaped-characters-in-url=\"false\" certificate-forwarding=\"false\" proxy-address-forwarding=\"false\" enable-http2=\"false\" http2-enable-push=\"false\" http2-header-table-size=\"4096\" http2-initial-window-size=\"65535\" http2-max-concurrent-streams=\"-1\" http2-max-frame-size=\"16384\" http2-max-header-list-size=\"-1\" require-host-http11=\"false\" proxy-protocol=\"false\" security-realm=\"https\" verify-client=\"NOT_REQUESTED\" enabled-cipher-suites=\"TLS_AES_256_GCM_SHA384\" enabled-protocols=\"TLSv1.3\" ssl-session-cache-size=\"0\" ssl-session-timeout=\"0\" />", "<w:keystore path=\"USD{karaf.etc}/certs/server.keystore\" provider=\"JKS\" alias=\"server\" keystore-password=\"secret\" key-password=\"secret\" generate-self-signed-certificate-host=\"localhost\" />", "<w:keystore path=\"USD{karaf.etc}/certs/alice.ks\" provider=\"JKS\" alias=\"alice\" keystore-password=\"StorePass\" key-password=\"KeyPass\" />", "<!--<interface name=\"secure\">--> <!--<w:inet-address value=\"127.0.0.1\" />--> <!--</interface>-->", "<interface name=\"secure\"> <w:inet-address value=\"0.0.0.0\" /> </interface>", "<socket-binding name=\"https\" interface=\"secure\" port=\"USD{org.osgi.service.http.port.secure}\" />", "https://localhost:8443/hawtio", "<!-- Only \"default\" worker and buffer-pool are supported and can be used to override the default values used by all listeners buffer-pool: - buffer-size defaults to: - when < 64MB of Xmx: 512 - when < 128MB of Xmx: 1024 - when >= 128MB of Xmx: 16K - 20 - direct-buffers defaults to: - when < 64MB of Xmx: false - when >= 64MB of Xmx: true worker: - io-threads defaults to Math.max(Runtime.getRuntime().availableProcessors(), 2); - task-core-threads and task-max-threads default to io-threads * 8 --> <!-- <subsystem xmlns=\"urn:jboss:domain:io:3.0\"> <buffer-pool name=\"default\" buffer-size=\"16364\" direct-buffers=\"true\" /> <worker name=\"default\" io-threads=\"8\" task-core-threads=\"64\" task-max-threads=\"64\" task-keepalive=\"60000\" /> </subsystem> -->", "<httpu:engine-factory> <httpu:engine port=\"9001\"> <httpu:threadingParameters minThreads=\"99\" maxThreads=\"777\" 
workerIOThreads=\"8\" workerIOName=\"WorkerIOTest\"/> </httpu:engine> </httpu:engine-factory>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/webconsole
11.4. Configuration Examples
11.4. Configuration Examples 11.4.1. Rsync as a daemon When using Red Hat Enterprise Linux, rsync can be used as a daemon, so that multiple clients can directly communicate with it as a central server in order to house centralized files and keep them synchronized. The following example demonstrates running rsync as a daemon over a network socket in the correct domain, and how SELinux expects this daemon to be running on a pre-defined (in SELinux policy) TCP port. This example then shows how to modify SELinux policy to allow the rsync daemon to run normally on a non-standard port. This example is performed on a single system to demonstrate SELinux policy and its control over local daemons and processes. Note that this is an example only and demonstrates how SELinux can affect rsync . Comprehensive documentation of rsync is beyond the scope of this document. See the official rsync documentation for further details. This example assumes that the rsync , setroubleshoot-server and audit packages are installed, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 11.1. Getting rsync to launch as rsync_t Run the getenforce command to confirm SELinux is running in enforcing mode: The getenforce command returns Enforcing when SELinux is running in enforcing mode. Run the which command to confirm that the rsync binary is in the system path: When running rsync as a daemon, a configuration file should be used and saved as /etc/rsyncd.conf . Note that the following configuration file used in this example is very simple and is not indicative of all the possible options that are available; rather, it is just enough to demonstrate the rsync daemon: Now that a simple configuration file exists for rsync to operate in daemon mode, this step demonstrates that simply running the rsync --daemon command is not sufficient for SELinux to offer its protection over rsync . See the following output: Note that in the output from the final ps command, the context shows the rsync daemon running in the unconfined_t domain. This indicates that rsync has not transitioned to the rsync_t domain as it was launched by the rsync --daemon command. At this point, SELinux cannot enforce its rules and policy over this daemon. See the following steps to see how to fix this problem. In the following steps, rsync transitions to the rsync_t domain because it is launched from a properly-labeled init script. Only then can SELinux and its protection mechanisms have an effect over rsync . This rsync process should be killed before proceeding to the next step. A custom init script for rsync is needed for this step. Save the following to /etc/rc.d/init.d/rsyncd . The following steps show how to label this script as initrc_exec_t : Run the semanage command to add a context mapping for /etc/rc.d/init.d/rsyncd : This mapping is written to the /etc/selinux/targeted/contexts/files/file_contexts.local file: Now use the restorecon command to apply this context mapping to the running system: Run the ls -lZ command to confirm the script has been labeled appropriately. Note that in the following output, the script has been labeled as initrc_exec_t : Turn on the rsync_server SELinux boolean: Note that this setting is not permanent and as such, it will revert to its original state after a reboot. To make the setting permanent, use the -P option with the setsebool command. Launch rsyncd via the new script. 
Now that rsync has started from an init script that had been appropriately labeled, the process has started as rsync_t : SELinux can now enforce its protection mechanisms over the rsync daemon as it is now running in the rsync_t domain. This example demonstrated how to get rsyncd running in the rsync_t domain. The next example shows how to get this daemon successfully running on a non-default port. TCP port 10000 is used in the example. Procedure 11.2. Running the rsync daemon on a non-default port Modify the /etc/rsyncd.conf file and add the port = 10000 line at the top of the file in the global configuration area (that is, before any file areas are defined). The new configuration file will look like: After launching rsync from the init script with this new setting, a denial similar to the following is logged by SELinux: Run the semanage command to add TCP port 10000 to SELinux policy in rsync_port_t : Now that TCP port 10000 has been added to SELinux policy for rsync_port_t , rsyncd will start and operate normally on this port: SELinux has had its policy modified and is now permitting rsyncd to operate on TCP port 10000.
[ "~]USD getenforce Enforcing", "~]USD which rsync /usr/bin/rsync", "log file = /var/log/rsync.log pid file = /var/run/rsyncd.pid lock file = /var/run/rsync.lock [files] path = /srv/files comment = file area read only = false timeout = 300", "~]# rsync --daemon ~]# ps x | grep rsync 8231 ? Ss 0:00 rsync --daemon 8233 pts/3 S+ 0:00 grep rsync ~]# ps -eZ | grep rsync unconfined_u:unconfined_r: unconfined_t :s0-s0:c0.c1023 8231 ? 00:00:00 rsync", "#!/bin/bash Source function library. . /etc/rc.d/init.d/functions [ -f /usr/bin/rsync ] || exit 0 case \"USD1\" in start) action \"Starting rsyncd: \" /usr/bin/rsync --daemon ;; stop) action \"Stopping rsyncd: \" killall rsync ;; *) echo \"Usage: rsyncd {start|stop}\" exit 1 esac exit 0", "~]# semanage fcontext -a -t initrc_exec_t \"/etc/rc.d/init.d/rsyncd\"", "~]# grep rsync /etc/selinux/targeted/contexts/files/file_contexts.local /etc/rc.d/init.d/rsyncd system_u:object_r:initrc_exec_t:s0", "~]# restorecon -R -v /etc/rc.d/init.d/rsyncd", "~]USD ls -lZ /etc/rc.d/init.d/rsyncd -rwxr-xr-x. root root system_u:object_r: initrc_exec_t :s0 /etc/rc.d/init.d/rsyncd", "~]# setsebool rsync_server on", "~]# service rsyncd start Starting rsyncd: [ OK ] ps -eZ | grep rsync unconfined_u:system_r: rsync_t :s0 9794 ? 00:00:00 rsync", "log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid lock file = /var/run/rsync.lock port = 10000 [files] path = /srv/files comment = file area read only = false timeout = 300", "Jul 22 10:46:59 localhost setroubleshoot: SELinux is preventing the rsync (rsync_t) from binding to port 10000. For complete SELinux messages, run sealert -l c371ab34-639e-45ae-9e42-18855b5c2de8", "~]# semanage port -a -t rsync_port_t -p tcp 10000", "~]# service rsyncd start Starting rsyncd: [ OK ]", "~]# netstat -lnp | grep 10000 tcp 0 0 0.0.0.0: 10000 0.0.0.0:* LISTEN 9910/rsync" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-rsync-configuration_examples
5.2. General Properties of Fencing Devices
5.2. General Properties of Fencing Devices Any cluster node can fence any other cluster node with any fence device, regardless of whether the fence resource is started or stopped. Whether the resource is started controls only the recurring monitor for the device, not whether it can be used, with the following exceptions: You can disable a fencing device by running the pcs stonith disable stonith_id command. This will prevent any node from using that device. To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource with the pcs constraint location ... avoids command. Configuring stonith-enabled=false will disable fencing altogether. Note, however, that Red Hat does not support clusters when fencing is disabled, as it is not suitable for a production environment. Table 5.1, "General Properties of Fencing Devices" describes the general properties you can set for fencing devices. Refer to Section 5.3, "Displaying Device-Specific Fencing Options" for information on fencing properties you can set for specific fencing devices. Note For information on more advanced fencing configuration properties, see Section 5.8, "Additional Fencing Configuration Options" . Table 5.1. General Properties of Fencing Devices Field Type Default Description pcmk_host_map string A mapping of host names to port numbers for devices that do not support host names. For example: node1:1;node2:2,3 tells the cluster to use port 1 for node1 and ports 2 and 3 for node2. pcmk_host_list string A list of machines controlled by this device (Optional unless pcmk_host_check=static-list ). pcmk_host_check string dynamic-list How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none (assume every device can fence every machine)
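The two exceptions described above correspond to short pcs commands. The following is an illustrative sketch only; myfence and node1 are hypothetical names for a stonith resource and a cluster node.
pcs stonith disable myfence                    # prevent any node from using this fencing device
pcs constraint location myfence avoids node1   # prevent node1 from using this fencing device
pcs stonith enable myfence                     # re-enable the device when finished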
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-genfenceprops-haar
9.5. Security for Generated REST Services
9.5. Security for Generated REST Services By default, all the generated REST-based services are secured using "HTTPBasic" with security domain "teiid-security" and with security role "rest". However, these properties can be customized by defining them in the vdb.xml file. Example 9.1. Example vdb.xml file security specification security-type - defines the security type. Allowed values are "HttpBasic" or "none". If omitted, this will default to "HttpBasic". security-domain - defines the JAAS security domain to be used with HttpBasic. If omitted, this will default to "teiid-security". security-role - the security role that HttpBasic will use to authorize the users. If omitted, the value will default to "rest". passthrough-auth - when defined, pass-through authentication is used to log in to JBoss Data Virtualization. When this is set to "true", make sure that the "embedded" transport configuration in standalone.xml has defined a security-domain that can be authenticated against. Failure to add the configuration change will result in an authentication error. Defaults to false. Important It is our intention to provide other types of security based on ws-security in future releases.
[ "<vdb name=\"sample\" version=\"1\"> <property name=\"UseConnectorMetadata\" value=\"true\" /> <property name=\"{http://teiid.org/rest}auto-generate\" value=\"true\"/> <property name=\"{http://teiid.org/rest}security-type\" value=\"HttpBasic\"/> <property name=\"{http://teiid.org/rest}security-domain\" value=\"teiid-security\"/> <property name=\"{http://teiid.org/rest}security-role\" value=\"example-role\"/> <property name=\"{http://teiid.org/rest}passthrough-auth\" value=\"true\"/> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/security_for_generated_rest_services
Chapter 5. Changed features
Chapter 5. Changed features Changed features are not deprecated and will continue to be supported until further notice. The following table provides information about features that are changed in Ansible Automation Platform 2.5: Component Feature Automation hub Error codes are now changed from 403 to 401. Any API clients relying on the specific status code 403 versus 401 will have to update their logic. Standard UI usage will work as expected. Event-Driven Ansible The /extra_vars endpoints are now moved to a property within /activations . Event-Driven Ansible The endpoint /credentials was replaced with /eda-credentials . This is part of an expanded credentials capability for Event-Driven Ansible. For more information, see the chapter Setting up credentials for Event-Driven Ansible controller in the Event-Driven Ansible controller user guide . Event-Driven Ansible Event-Driven Ansible can no longer add, edit, or delete platform gateway-managed resources. Creating, editing, or deleting organizations, teams, or users is available through platform gateway endpoints only. The platform gateway endpoints also enable you to edit organization or team memberships and configure external authentication. API Auditing of users has now changed. Users are now audited through the platform API, not through the controller API. This change applies to the Ansible Automation Platform in both cloud service and on-premise deployments. Automation controller, automation hub, platform gateway, and Event-Driven Ansible User permission audits now follow the sources of truth for the platform gateway. When an IdP (SSO) is used, then the IdP should be the source of truth for user permission audits. When the Ansible Automation Platform platform gateway is used without SSO, then the platform gateway should be the source of truth for user permissions, not the app-specific UIs or APIs.
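Because automation hub now returns 401 rather than 403 for unauthenticated requests, a client that branches on the exact status code should accept either value. The following shell sketch is illustrative only; the hub URL and API path are placeholders, not real endpoints.
status=$(curl -s -o /dev/null -w '%{http_code}' https://hub.example.com/api/galaxy/v3/collections/)
case "$status" in
  401|403) echo "authentication required or denied (HTTP $status)" ;;
  *)       echo "HTTP $status" ;;
esac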
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/release_notes/aap-2.5-changed-features
Chapter 3. Installing web console add-ons and creating custom pages
Chapter 3. Installing web console add-ons and creating custom pages Depending on how you want to use your Red Hat Enterprise Linux system, you can add additional available applications to the web console or create custom pages based on your use case. 3.1. Add-ons for the RHEL web console While the cockpit package is a part of Red Hat Enterprise Linux by default, you can install add-on applications on demand by using the following command: In the command, replace <add-on> by a package name from the list of available add-on applications for the RHEL web console. Feature name Package name Usage Composer cockpit-composer Building custom OS images File manager cockpit-files Managing files and directories in the standard web-console interface Machines cockpit-machines Managing libvirt virtual machines PackageKit cockpit-packagekit Software updates and application installation (usually installed by default) PCP cockpit-pcp Persistent and more fine-grained performance data (installed on demand from the UI) Podman cockpit-podman Managing containers and managing container images Session Recording cockpit-session-recording Recording and managing user sessions Storage cockpit-storaged Managing storage through udisks 3.2. Creating new pages in the web console If you want to add customized functions to your Red Hat Enterprise Linux web console, you must add the package directory that contains the HTML and JavaScript files for the page that runs the required function. For detailed information about adding custom pages, see Creating Plugins for the Cockpit User Interface on the Cockpit Project website. Additional resources Cockpit Packages section in the Cockpit Project Developer Guide 3.3. Overriding the manifest settings in the web console You can modify the menu of the web console for a particular user and all users of the system. In the cockpit project, a package name is a directory name. A package contains the manifest.json file along with other files. Default settings are present in the manifest.json file. You can override the default cockpit menu settings by creating a <package-name> .override.json file at a specific location for the specified user. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . Procedure Override manifest settings in the <systemd> .override.json file in a text editor of your choice, for example: To edit for all users, enter: To edit for a single user, enter: Edit the required file with the following details: { "menu": { "services": null, "logs": { "order": -1 } } } The null value hides the services tab The -1 value moves the logs tab to the first place. Restart the cockpit service: Additional resources cockpit(1) man page on your system Manifest overrides
[ "dnf install <add-on>", "vi /etc/cockpit/ <systemd> .override.json", "vi ~/.config/cockpit/ <systemd> .override.json", "{ \"menu\": { \"services\": null, \"logs\": { \"order\": -1 } } }", "systemctl restart cockpit.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_systems_using_the_rhel_9_web_console/cockpit-add-ons-_system-management-using-the-rhel-9-web-console
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-04-29 12:48:44 UTC
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_3scale_api_management_with_the_amq_streams_kafka_bridge/using_your_subscription
4.4. Creating the Replica
4.4. Creating the Replica On the master server, create a replica information file . This contains realm and configuration information taken from the master server which will be used to configure the replica server. Run the ipa-replica-prepare utility on the master IdM server . The utility requires the fully-qualified domain name of the replica machine. Using the --ip-address option automatically creates DNS entries for the replica, including the A and PTR records for the replica to the DNS. Important Only pass the --ip-address option if the IdM server was configured with integrated DNS. Otherwise, there is no DNS record to update, and the attempt to create the replica fails when the DNS record operation fails. Note The ipa-replica-prepare script does not validate the IP address or verify if the IP address of the replica is reachable by other servers. [root@server ~]# ipa-replica-prepare ipareplica.example.com --ip-address 192.168.1.2 Directory Manager (existing master) password: Preparing replica for ipareplica.example.com from ipaserver.example.com Creating SSL certificate for the Directory Server Creating SSL certificate for the dogtag Directory Server Saving dogtag Directory Server port Creating SSL certificate for the Web Server Exporting RA certificate Copying additional files Finalizing configuration Packaging replica information into /var/lib/ipa/replica-info-ipareplica.example.com.gpg Adding DNS records for ipareplica.example.com Using reverse zone 1.168.192.in-addr.arpa. The ipa-replica-prepare command was successful This must be a valid DNS name, which means only numbers, alphabetic characters, and hyphens (-) are allowed. Other characters, like underscores, in the hostname will cause DNS failures. Additionally, the hostname must be all lower-case. No capital letters are allowed. Each replica information file is created in the /var/lib/ipa/ directory as a GPG-encrypted file. Each file is named specifically for the replica server for which it is intended, such as replica-info-ipareplica.example.com.gpg . Note A replica information file cannot be used to create multiple replicas. It can only be used for the specific replica and machine for which it was created. Warning Replica information files contain sensitive information. Take appropriate steps to ensure that they are properly protected. For more options with ipa-replica-prepare , see the ipa-replica-prepare (1) man page. Copy the replica information file to the replica server: [root@server ~]# scp /var/lib/ipa/replica-info-ipareplica.example.com.gpg root@ipaserver:/var/lib/ipa/ On the replica server, run the replica installation script, referencing the replication information file. There are other options for setting up DNS, much like the server installation script. Additionally, there is an option to configure a CA for the replica; while CA's are installed by default for servers, they are optional for replicas. Some information about DNS forwarders is required. A list can be given of configured DNS forwarders using a --forwarder option for each one, or forwarder configuration can be skipped by specifying the --no-forwarders option. For example: [root@ipareplica ~]# ipa-replica-install --setup-ca --setup-dns --no-forwarders /var/lib/ipa/replica-info-ipareplica.example.com.gpg Directory Manager (existing master) password: Warning: Hostname (ipareplica.example.com) not found in DNS Run connection check to master Check connection from replica to remote master 'ipareplica. 
example.com': Directory Service: Unsecure port (389): OK Directory Service: Secure port (636): OK Kerberos KDC: TCP (88): OK Kerberos Kpasswd: TCP (464): OK HTTP Server: Unsecure port (80): OK HTTP Server: Secure port (443): OK The following list of ports use UDP protocol and would need to be checked manually: Kerberos KDC: UDP (88): SKIPPED Kerberos Kpasswd: UDP (464): SKIPPED Connection from replica to master is OK. Start listening on required ports for remote master check Get credentials to log in to remote master [email protected] password: Execute check on remote master [email protected]'s password: Check connection from master to remote replica 'ipareplica. example.com': Directory Service: Unsecure port (389): OK Directory Service: Secure port (636): OK Kerberos KDC: TCP (88): OK Kerberos KDC: UDP (88): OK Kerberos Kpasswd: TCP (464): OK Kerberos Kpasswd: UDP (464): OK HTTP Server: Unsecure port (80): OK HTTP Server: Secure port (443): OK Connection from master to replica is OK. Connection check OK The replica installation script runs a test to ensure that the replica file being installed matches the current hostname. If they do not match, the script returns a warning message and asks for confirmation. This could occur on a multi-homed machine, for example, where mismatched hostnames may not be an issue. Additional options for the replica installation script are listed in the ipa-replica-install (1) man page. Note One of the options ipa-replica-install accepts is the --ip-address option. When added to ipa-replica-install , this option only accepts IP addresses associated with the local interface. Enter the Directory Manager password when prompted. The script then configures a Directory Server instance based on information in the replica information file and initiates a replication process to copy over data from the master server to the replica, a process called initialization . Verify that the proper DNS entries were created so that IdM clients can discover the new server. DNS entries are required for required domain services: _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp If the initial IdM server was created with DNS enabled, then the replica is created with the proper DNS entries. For example: If the initial IdM server was created without DNS enabled, then each DNS entry, including both TCP and UDP entries for some services, should be added manually. For example: Optional. Set up DNS services for the replica. These are not configured by the setup script, even if the master server uses DNS. Use the ipa-dns-install command to install the DNS manually, then use the ipa dnsrecord-add command to add the required DNS records. For example: [root@ipareplica ~]# ipa-dns-install [root@ipareplica ~]# ipa dnsrecord-add example.com @ --ns-rec ipareplica.example.com. Important Use the fully-qualified domain name of the replica, including the final period (.), otherwise BIND will treat the hostname as relative to the domain.
[ "ipa-replica-prepare ipareplica.example.com --ip-address 192.168.1.2 Directory Manager (existing master) password: Preparing replica for ipareplica.example.com from ipaserver.example.com Creating SSL certificate for the Directory Server Creating SSL certificate for the dogtag Directory Server Saving dogtag Directory Server port Creating SSL certificate for the Web Server Exporting RA certificate Copying additional files Finalizing configuration Packaging replica information into /var/lib/ipa/replica-info-ipareplica.example.com.gpg Adding DNS records for ipareplica.example.com Using reverse zone 1.168.192.in-addr.arpa. The ipa-replica-prepare command was successful", "scp /var/lib/ipa/replica-info-ipareplica.example.com.gpg root@ipaserver:/var/lib/ipa/", "ipa-replica-install --setup-ca --setup-dns --no-forwarders /var/lib/ipa/replica-info-ipareplica.example.com.gpg Directory Manager (existing master) password: Warning: Hostname (ipareplica.example.com) not found in DNS Run connection check to master Check connection from replica to remote master 'ipareplica. example.com': Directory Service: Unsecure port (389): OK Directory Service: Secure port (636): OK Kerberos KDC: TCP (88): OK Kerberos Kpasswd: TCP (464): OK HTTP Server: Unsecure port (80): OK HTTP Server: Secure port (443): OK The following list of ports use UDP protocol and would need to be checked manually: Kerberos KDC: UDP (88): SKIPPED Kerberos Kpasswd: UDP (464): SKIPPED Connection from replica to master is OK. Start listening on required ports for remote master check Get credentials to log in to remote master [email protected] password: Execute check on remote master [email protected]'s password: Check connection from master to remote replica 'ipareplica. example.com': Directory Service: Unsecure port (389): OK Directory Service: Secure port (636): OK Kerberos KDC: TCP (88): OK Kerberos KDC: UDP (88): OK Kerberos Kpasswd: TCP (464): OK Kerberos Kpasswd: UDP (464): OK HTTP Server: Unsecure port (80): OK HTTP Server: Secure port (443): OK Connection from master to replica is OK. Connection check OK", "DOMAIN=example.com NAMESERVER=ipareplica for i in _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp; do echo \"\"; dig @USD{NAMESERVER} USD{i}.USD{DOMAIN} srv +nocmd +noquestion +nocomments +nostats +noaa +noadditional +noauthority; done | egrep -v \"^;\" | egrep _ _ldap._tcp.example.com. 86400 IN SRV 0 100 389 ipaserver1.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 ipaserver2.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 ipaserver1.example.com. ...8<", "kinit admin ipa dnsrecord-add example.com _ldap._tcp --srv-rec=\"0 100 389 ipareplica.example.com.\"", "ipa-dns-install ipa dnsrecord-add example.com @ --ns-rec ipareplica.example.com." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/creating-the-replica
Chapter 3. Connectivity Link platform engineer workflow
Chapter 3. Connectivity Link platform engineer workflow This section of the walkthrough shows how, as a platform engineer, you can deploy a Gateway that provides secure communication and is protected and ready for use by application development teams. It also shows how to use this Gateway in multiple clusters in different geographic regions. Prerequisites See Chapter 2, Check your Connectivity Link installation and permissions . Note In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded. 3.1. Step 1 - Set your environment variables Procedure Set the following environment variables, which are used for convenience in this guide: In this example, zid is your hosted zone ID displayed in the AWS Route 53 console. rootDomain is the top-level AWS Route 53 domain name that you will use for Connectivity Link. Note This guide uses environment variables for convenience only. If you know the environment variable values, you can set up the required .yaml files in a way that suits your needs. 3.2. Step 2 - Set up a DNS provider secret The DNS provider supplies a credential to access the DNS zones that Connectivity Link can use to set up DNS configuration. You must ensure that this credential has access to only the zones that you want managed. Note You must apply the following Secret resources to each cluster. If you are adding an additional cluster, add them to the new cluster. Procedure If your Gateway namespace does not already exist, create it as follows: If the secret for your DNS provider credentials was not already created when installing Connectivity Link, create this secret in your Gateway namespace as follows: Before adding a TLS issuer, you must also create the credentials secret in the cert-manager namespace as follows: 3.3. Step 3 - Add a TLS issuer To secure communication to the Gateways, you will define a TLS issuer for TLS certificates. This example uses Let's Encrypt, but you can use any certificate issuer supported by cert-manager . Procedure Enter the following command to define a TLS issuer. This example uses Let's Encrypt, which you must also apply to all clusters: Wait for the ClusterIssuer to become ready as follows: 3.4. Step 4 - Set up a Gateway For Connectivity Link to balance traffic using DNS across two or more clusters, you must define a Gateway with a shared host. You will define this by using an HTTPS listener with a wildcard hostname based on the root domain. As mentioned earlier, you must apply these resources to all clusters. Note For now, the Gateway is set to accept an HTTPRoute from the same namespace only. This allows you to restrict who can use the Gateway until it is ready for general use. Procedure Enter the following command to create the Gateway: Check the status of your Gateway as follows: Your Gateway should be accepted and programmed, which means valid and assigned an external address. However, if you check your HTTPS listener status as follows, you will see that it is not yet programmed or ready to accept traffic due to bad TLS configuration: Connectivity Link can help with this by using a TLSPolicy, which is described in the next step. 3.4.1. Optional: Configure metrics to be scraped from the Gateway instance If you have Prometheus set up in your cluster, you can configure a PodMonitor to scrape metrics directly from the Gateway pod. This configuration is required for metrics such as istio_requests_total . 
You must add the following configuration in the namespace where the Gateway is running: For more information on configuring metrics, see the Connectivity Link Observability Guide . 3.5. Step 5 - Configure your Gateway policies and HTTP route While your Gateway is now deployed, it has no exposed endpoints and your HTTPS listener is not programmed. Next, you can set up a TLSPolicy that leverages your CertificateIssuer to set up your HTTPS listener certificates. You will define an AuthPolicy that will set up a default HTTP 403 response for any unprotected endpoints, as well as a RateLimitPolicy that will set up a default artificially low global limit to further protect any endpoints exposed by this Gateway. You will also define a DNSPolicy with a load balancing strategy, and an HTTPRoute for your Gateway to communicate with your backend application API. 3.5.1. Set the TLS policy Procedure Set the TLSPolicy for your Gateway as follows: Check that your TLS policy was accepted by the controller as follows: 3.5.2. Set the Auth policy Procedure Set a default, deny-all AuthPolicy for your Gateway as follows: Check that your auth policy was accepted by the controller as follows: 3.5.3. Set the rate limit policy Procedure Set the default RateLimitPolicy for your Gateway as follows: Note It might take a few minutes for the RateLimitPolicy to be applied depending on your cluster. The limit in this example is artificially low to show it working easily. To check that your rate limits have been accepted, enter the following command: 3.5.4. Set the DNS policy Procedure Set the DNSPolicy for your Gateway as follows: Note The DNSPolicy will use the DNS Provider Secret that you defined earlier. The geo in this example is EU , but you can change this to suit your requirements. Check that your DNSPolicy has been accepted as follows: 3.5.5. Create an HTTP route Note For test purposes, this section assumes that the toystore application is deployed. For more information, see Chapter 4, Connectivity Link application developer workflow . Procedure Create an HTTPRoute to test your Gateway as follows: Check that your Gateway policies are enforced as follows: Check that your HTTPS listener is ready as follows: 3.6. Step 6 - Test connectivity and deny all auth You can use curl to test your endpoint connectivity and auth. Procedure Enter the following command: You should see an HTTP 403 response. 3.7. Step 7 - Open up the Gateway for other namespaces Because you have configured the Gateway, secured it with Connectivity Link policies, and tested it, you can now open it up for use by other teams in other namespaces. Procedure Enter the following command: 3.8. Step 8 - Extend the Gateway to multiple clusters and configure geo-based routing Procedure To distribute this Gateway across multiple clusters, repeat this setup process for each cluster. By default, this will implement a round-robin DNS strategy to distribute traffic evenly across the different clusters. Setting up your Gateways to serve clients based on their geographic location is straightforward with your current configuration. Assuming that you have deployed Gateway instances across multiple clusters as per this guide, the next step involves updating the DNS controller with the geographic regions of the visible Gateways. For instance, if you have one cluster in North America and another in the EU, you can direct traffic to these Gateways based on their location by configuring the appropriate policy. 
For your North American cluster, you can create a DNSPolicy and set the loadBalancing:geo field to US .
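A minimal sketch of that US variant is shown below. It mirrors the EU DNSPolicy used earlier in this chapter, assumes the same gatewayName and gatewayNS environment variables and the aws-credentials secret, and assumes that kubectl (or oc) is your cluster CLI; defaultGeo is set to false on the assumption that the EU policy remains the default geo.
kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: ${gatewayName}-dnspolicy
  namespace: ${gatewayNS}
spec:
  targetRef:
    name: ${gatewayName}
    group: gateway.networking.k8s.io
    kind: Gateway
  providerRefs:
    - name: aws-credentials
  loadBalancing:
    weight: 120
    geo: US
    defaultGeo: false
EOF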
[ "export zid=change-to-your-DNS-zone-ID export rootDomain=demo.example.com export gatewayNS=api-gateway export gatewayName=external export devNS=toystore export AWS_ACCESS_KEY_ID=xxxx export AWS_SECRET_ACCESS_KEY=xxxx export AWS_REGION=us-east-1 export clusterIssuerName=lets-encrypt export [email protected]", "create ns USD{gatewayNS}", "-n USD{gatewayNS} create secret generic aws-credentials --type=kuadrant.io/aws --from-literal=AWS_ACCESS_KEY_ID=USDAWS_ACCESS_KEY_ID --from-literal=AWS_SECRET_ACCESS_KEY=USDAWS_SECRET_ACCESS_KEY", "-n cert-manager create secret generic aws-credentials --type=kuadrant.io/aws --from-literal=AWS_ACCESS_KEY_ID=USDAWS_ACCESS_KEY_ID --from-literal=AWS_SECRET_ACCESS_KEY=USDAWS_SECRET_ACCESS_KEY", "apply -f - <<EOF apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: USD{clusterIssuerName} spec: acme: email: USD{EMAIL} privateKeySecretRef: name: le-secret server: https://acme-v02.api.letsencrypt.org/directory solvers: - dns01: route53: hostedZoneID: USD{zid} region: USD{AWS_REGION} accessKeyIDSecretRef: key: AWS_ACCESS_KEY_ID name: aws-credentials secretAccessKeySecretRef: key: AWS_SECRET_ACCESS_KEY name: aws-credentials EOF", "wait clusterissuer/USD{clusterIssuerName} --for=condition=ready=true", "apply -f - <<EOF apiVersion: gateway.networking.k8s.io/v1 kind: Gateway metadata: name: USD{gatewayName} namespace: USD{gatewayNS} labels: kuadrant.io/gateway: \"true\" spec: gatewayClassName: istio listeners: - allowedRoutes: namespaces: from: Same hostname: \"*.USD{rootDomain}\" name: api port: 443 protocol: HTTPS tls: certificateRefs: - group: \"\" kind: Secret name: api-USD{gatewayName}-tls mode: Terminate EOF", "get gateway USD{gatewayName} -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Accepted\")].message}' get gateway USD{gatewayName} -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Programmed\")].message}'", "get gateway USD{gatewayName} -n USD{gatewayNS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type==\"Programmed\")].message}'", "apply -f - <<EOF apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: USD{gatewayNS} spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [\"__meta_kubernetes_pod_container_name\"] regex: \"istio-proxy\" - action: keep sourceLabels: [\"__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape\"] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: \"[\\USD2]:\\USD1\" sourceLabels: [ \"__meta_kubernetes_pod_annotation_prometheus_io_port\", \"__meta_kubernetes_pod_ip\", ] targetLabel: \"__address__\" - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: \"\\USD2:\\USD1\" sourceLabels: [ \"__meta_kubernetes_pod_annotation_prometheus_io_port\", \"__meta_kubernetes_pod_ip\", ] targetLabel: \"__address__\" - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [\"__meta_kubernetes_namespace\"] action: replace targetLabel: namespace - sourceLabels: [\"__meta_kubernetes_pod_name\"] action: replace targetLabel: pod_name EOF", "apply -f - <<EOF apiVersion: kuadrant.io/v1 kind: TLSPolicy metadata: name: USD{gatewayName}-tls namespace: USD{gatewayNS} spec: targetRef: name: USD{gatewayName} group: gateway.networking.k8s.io kind: Gateway issuerRef: group: cert-manager.io kind: ClusterIssuer name: USD{clusterIssuerName} EOF", "get 
tlspolicy USD{gatewayName}-tls -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Accepted\")].message}'", "apply -f - <<EOF apiVersion: kuadrant.io/v1 kind: AuthPolicy metadata: name: USD{gatewayName}-auth namespace: USD{gatewayNS} spec: targetRef: group: gateway.networking.k8s.io kind: Gateway name: USD{gatewayName} defaults: rules: authorization: \"deny\": opa: rego: \"allow = false\" EOF", "get authpolicy USD{gatewayName}-auth -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Accepted\")].message}'", "apply -f - <<EOF apiVersion: kuadrant.io/v1 kind: RateLimitPolicy metadata: name: USD{gatewayName}-rlp namespace: USD{gatewayNS} spec: targetRef: group: gateway.networking.k8s.io kind: Gateway name: USD{gatewayName} defaults: limits: \"low-limit\": rates: - limit: 2 window: 10s EOF", "get ratelimitpolicy USD{gatewayName}-rlp -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Accepted\")].message}'", "apply -f - <<EOF apiVersion: kuadrant.io/v1 kind: DNSPolicy metadata: name: USD{gatewayName}-dnspolicy namespace: USD{gatewayNS} spec: targetRef: name: USD{gatewayName} group: gateway.networking.k8s.io kind: Gateway providerRefs: - name: aws-credentials loadBalancing: weight: 120 geo: EU defaultGeo: true EOF", "get dnspolicy USD{gatewayName}-dnspolicy -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Accepted\")].message}'", "apply -f - <<EOF apiVersion: gateway.networking.k8s.io/v1 kind: HTTPRoute metadata: name: test namespace: USD{gatewayNS} labels: service: toystore spec: parentRefs: - name: USD{gatewayName} namespace: USD{gatewayNS} hostnames: - \"test.USD{rootDomain}\" rules: - backendRefs: - name: toystore port: 80 EOF", "get dnspolicy USD{gatewayName}-dnspolicy -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Enforced\")].message}' get authpolicy USD{gatewayName}-auth -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Enforced\")].message}' get ratelimitpolicy USD{gatewayName}-rlp -n USD{gatewayNS} -o=jsonpath='{.status.conditions[?(@.type==\"Enforced\")].message}'", "get gateway USD{gatewayName} -n USD{gatewayNS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type==\"Programmed\")].message}'", "curl -w \"%{http_code}\" https://USD(kubectl get httproute test -n USD{gatewayNS} -o=jsonpath='{.spec.hostnames[0]}')", "patch gateway USD{gatewayName} -n USD{gatewayNS} --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/listeners/0/allowedRoutes/namespaces/from\", \"value\":\"All\"}]'" ]
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/configuring_and_deploying_gateway_policies_with_connectivity_link/rhcl_platform_engineer-workflow_rhcl
Preface
Preface The Red Hat Virtualization Manager includes a data warehouse that collects monitoring data about hosts, virtual machines, and storage. Data Warehouse, which includes a database and a service, must be installed and configured along with the Manager setup, either on the same machine or on a separate server. The Red Hat Virtualization installation creates two databases: The Manager database ( engine ) is the primary data store used by the Red Hat Virtualization Manager. Information about the virtualization environment like its state, configuration, and performance are stored in this database. The Data Warehouse database ( ovirt_engine_history ) contains configuration information and statistical data which is collated over time from the Manager database. The configuration data in the Manager database is examined every minute, and changes are replicated to the Data Warehouse database. Tracking the changes to the database provides information on the objects in the database. This enables you to analyze and enhance the performance of your Red Hat Virtualization environment and resolve difficulties. To calculate an estimate of the space and resources the ovirt_engine_history database will use, use the RHV Manager History Database Size Calculator tool. The estimate is based on the number of entities and the length of time you have chosen to retain the history records.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/pr01
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/release_notes_for_red_hat_jboss_enterprise_application_platform_8.0/proc_providing-feedback-on-red-hat-documentation_assembly-release-notes
Chapter 16. Using TLS certificates for applications accessing RGW
Chapter 16. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . 16.1. Accessing External RGW server in OpenShift Data Foundation Accessing External RGW server using Object Bucket Claims The S3 credentials, such as the AccessKey or Secret Key, are stored in the secret generated by the Object Bucket Claim (OBC) creation, and you can fetch them by using the following commands: Similarly, you can fetch the endpoint details from the configmap of the OBC: Accessing External RGW server using the Ceph Object Store User CR You can fetch the S3 Credentials and endpoint details from the secret generated as part of the Ceph Object Store User CR: Important For both of the access mechanisms, you can either request new certificates from the administrator or reuse the certificates from the Kubernetes secret, ceph-rgw-tls-cert .
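After extracting the certificate with the commands for this chapter, one common pattern is to pass it to an S3 client as a CA bundle. The following sketch assumes the internal RGW secret name shown above, an illustrative endpoint URL, and that S3-style credentials for the bucket are already configured for the aws CLI.
oc get secrets/<objectstore_name>-cos-ceph-rgw-tls-cert -o jsonpath='{.data..tls\.crt}' | base64 -d > rgw-ca.crt
aws --ca-bundle ./rgw-ca.crt --endpoint-url https://rgw.example.com s3 ls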
[ "oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d", "oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d", "oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode", "oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_HOST}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_PORT}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_NAME}'", "oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.AccessKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.SecretKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.Endpoint}' | base64 --decode" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/using-tls-certificates-for-applications-accessing-rgw_rhodf
probe::signal.send_sig_queue
probe::signal.send_sig_queue Name probe::signal.send_sig_queue - Queuing a signal to a process Synopsis signal.send_sig_queue Values sig - The queued signal. name - Name of the probe point. sig_pid - The PID of the process to which the signal is queued. pid_name - Name of the process to which the signal is queued. sig_name - A string representation of the signal. sigqueue_addr - The address of the signal queue.
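As a quick illustration of these variables, a one-line SystemTap session can print each queued signal as it happens. This is an example only; it must be run as root and typically requires the matching kernel debuginfo packages to be installed.
stap -e 'probe signal.send_sig_queue { printf("%s queued to %s (pid %d)\n", sig_name, pid_name, sig_pid) }'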
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-send-sig-queue
API overview
API overview OpenShift Container Platform 4.15 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team
[ "oc debug node/<node>", "chroot /host", "systemctl cat kubelet", "/etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "echo -e \"[Service]\\nEnvironment=\\\"KUBELET_LOG_LEVEL=8\\\"\" > /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "oc adm node-logs --role master -u kubelet", "oc adm node-logs --role worker -u kubelet", "journalctl -b -f -u kubelet.service", "sudo tail -f /var/log/containers/*", "- for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/api_overview/index
Chapter 8. Accessing Support Using the Red Hat Support Tool
Chapter 8. Accessing Support Using the Red Hat Support Tool The Red Hat Support Tool , in the redhat-support-tool package, can function as both an interactive shell and as a single-execution program. It can be run over SSH or from any terminal. It enables, for example, searching the Red Hat Knowledgebase from the command line, copying solutions directly on the command line, opening and updating support cases, and sending files to Red Hat for analysis. 8.1. Installing the Red Hat Support Tool The Red Hat Support Tool is installed by default on Red Hat Enterprise Linux. If required, to ensure that it is, enter the following command as root : 8.2. Registering the Red Hat Support Tool Using the Command Line To register the Red Hat Support Tool to the customer portal using the command line, run the following commands: Where username is the user name of the Red Hat Customer Portal account. 8.3. Using the Red Hat Support Tool in Interactive Shell Mode To start the tool in interactive mode, enter the following command: The tool can be run as an unprivileged user, with a consequently reduced set of commands, or as root . The commands can be listed by entering the ? character. The program or menu selection can be exited by entering the q or e character. You will be prompted for your Red Hat Customer Portal user name and password when you first search the Knowledgebase or support cases. Alternately, set the user name and password for your Red Hat Customer Portal account using interactive mode, and optionally save it to the configuration file. 8.4. Configuring the Red Hat Support Tool When in interactive mode, the configuration options can be listed by entering the command config --help : Registering the Red Hat Support Tool Using Interactive Mode To register the Red Hat Support Tool to the customer portal using interactive mode, proceed as follows: Start the tool by entering the following command: Enter your Red Hat Customer Portal user name: To save your user name to the global configuration file, add the -g option. Enter your Red Hat Customer Portal password: 8.4.1. Saving Settings to the Configuration Files The Red Hat Support Tool , unless otherwise directed, stores values and options locally in the home directory of the current user, using the ~/.redhat-support-tool/redhat-support-tool.conf configuration file. If required, it is recommended to save passwords to this file because it is only readable by that particular user. When the tool starts, it will read values from the global configuration file /etc/redhat-support-tool.conf and from the local configuration file. Locally stored values and options take precedence over globally stored settings. Warning It is recommended not to save passwords in the global /etc/redhat-support-tool.conf configuration file because the password is just base64 encoded and can easily be decoded. In addition, the file is world readable. To save a value or option to the global configuration file, add the -g, --global option as follows: Note In order to be able to save settings globally, using the -g, --global option, the Red Hat Support Tool must be run as root because normal users do not have the permissions required to write to /etc/redhat-support-tool.conf . To remove a value or option from the local configuration file, add the -u, --unset option as follows: This will clear, unset, the parameter from the tool and fall back to the equivalent setting in the global configuration file, if available. 
Note When running as an unprivileged user, values stored in the global configuration file cannot be removed using the -u, --unset option, but they can be cleared, unset, from the current running instance of the tool by using the -g, --global option simultaneously with the -u, --unset option. If running as root , values and options can be removed from the global configuration file using -g, --global simultaneously with the -u, --unset option. 8.5. Opening and Updating Support Cases Using Interactive Mode Opening a New Support Case Using Interactive Mode To open a new support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the opencase command: Follow the on-screen prompts to select a product and then a version. Enter a summary of the case. Enter a description of the case and press Ctrl + D on an empty line when complete. Select a severity of the case. Optionally choose to see if there is a solution to this problem before opening a support case. Confirm that you would still like to open the support case. Optionally choose to attach an SOS report. Optionally choose to attach a file. Viewing and Updating an Existing Support Case Using Interactive Mode To view and update an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the getcase command: Where case-number is the number of the case you want to view and update. Follow the on-screen prompts to view the case, modify or add comments, and get or add attachments. Modifying an Existing Support Case Using Interactive Mode To modify the attributes of an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the modifycase command: Where case-number is the number of the case you want to view and update. The modify selection list appears: Follow the on-screen prompts to modify one or more of the options. For example, to modify the status, enter 3 : 8.6. Viewing Support Cases on the Command Line Viewing the contents of a case on the command line provides a quick and easy way to apply solutions from the command line. To view an existing support case on the command line, enter a command as follows: Where case-number is the number of the case you want to download. 8.7. Additional Resources The Red Hat Knowledgebase article Red Hat Support Tool has additional information, examples, and video tutorials.
[ "~]# yum install redhat-support-tool", "~]# redhat-support-tool config user username", "~]# redhat-support-tool config password Please enter the password for username :", "~]USD redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help):", "~]# redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help): config --help Usage: config [options] config.option <new option value> Use the 'config' command to set or get configuration file values. Options: -h, --help show this help message and exit -g, --global Save configuration option in /etc/redhat-support-tool.conf. -u, --unset Unset configuration option. The configuration file options which can be set are: user : The Red Hat Customer Portal user. password : The Red Hat Customer Portal password. debug : CRITICAL, ERROR, WARNING, INFO, or DEBUG url : The support services URL. Default=https://api.access.redhat.com proxy_url : A proxy server URL. proxy_user: A proxy server user. proxy_password: A password for the proxy server user. ssl_ca : Path to certificate authorities to trust during communication. kern_debug_dir: Path to the directory where kernel debug symbols should be downloaded and cached. Default=/var/lib/redhat-support-tool/debugkernels Examples: - config user - config user my-rhn-username - config --unset user", "~]# redhat-support-tool", "Command (? for help): config user username", "Command (? for help): config password Please enter the password for username :", "Command (? for help): config setting -g value", "Command (? for help): config setting -u value", "~]# redhat-support-tool", "Command (? for help): opencase", "Support case 0123456789 has successfully been opened", "~]# redhat-support-tool", "Command (? for help): getcase case-number", "~]# redhat-support-tool", "Command (? for help): modifycase case-number", "Type the number of the attribute to modify or 'e' to return to the previous menu. 1 Modify Type 2 Modify Severity 3 Modify Status 4 Modify Alternative-ID 5 Modify Product 6 Modify Version End of options.", "Selection: 3 1 Waiting on Customer 2 Waiting on Red Hat 3 Closed Please select a status (or 'q' to exit):", "~]# redhat-support-tool getcase case-number" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-red_hat_support_tool
Chapter 8. Troubleshooting problems by using log files
Chapter 8. Troubleshooting problems by using log files Log files contain messages about the system, including the kernel, services, and applications running on it. These contain information that helps troubleshoot issues or monitor system functions. The logging system in Red Hat Enterprise Linux is based on the built-in syslog protocol. Particular programs use this system to record events and organize them into log files, which are useful when auditing the operating system and troubleshooting various problems. 8.1. Services handling syslog messages The following two services handle syslog messages: The systemd-journald daemon The systemd-journald daemon collects messages from various sources and forwards them to Rsyslog for further processing. The systemd-journald daemon collects messages from the following sources: Kernel Early stages of the boot process Standard and error output of daemons as they start up and run Syslog The Rsyslog service The Rsyslog service sorts the syslog messages by type and priority and writes them to the files in the /var/log directory. The /var/log directory persistently stores the log messages. 8.2. Log files storing syslog messages The following log files under the /var/log directory store syslog messages. /var/log/messages - all syslog messages except the following /var/log/secure - security and authentication-related messages and errors /var/log/maillog - mail server-related messages and errors /var/log/cron - log files related to periodically executed tasks /var/log/boot.log - log files related to system startup Note The above mentioned list contains only some files and the actual list of files in the /var/log/ directory depends on which services and applications log in to this directory. 8.3. Viewing logs using the command line The Journal is a component of systemd that helps to view and manage log files. It addresses problems connected with traditional logging, closely integrated with the rest of the system, and supports various logging technologies and access management for log entries. You can use the journalctl command to view messages in the system journal using the command line. Table 8.1. Viewing system information Command Description journalctl Shows all collected journal entries. journalctl FILEPATH Shows logs related to a specific file. For example, the journalctl /dev/sda command displays logs related to the /dev/sda file system. journalctl -b Shows logs for the current boot. journalctl -k -b -1 Shows kernel logs for the current boot. Table 8.2. Viewing information about specific services Command Description journalctl -b _SYSTEMD_UNIT= <name.service> Filters log to show entries matching the systemd service. journalctl -b _SYSTEMD_UNIT= <name.service> _PID= <number> Combines matches. For example, this command shows logs for systemd-units that match <name.service> and the PID <number> . journalctl -b _SYSTEMD_UNIT= <name.service> _PID= <number> + _SYSTEMD_UNIT= <name2.service> The plus sign (+) separator combines two expressions in a logical OR. For example, this command shows all messages from the <name.service> service process with the PID plus all messages from the <name2.service> service (from any of its processes). journalctl -b _SYSTEMD_UNIT= <name.service> _SYSTEMD_UNIT= <name2.service> This command shows all entries matching either expression, referring to the same field. Here, this command shows logs matching a systemd-unit <name.service> or a systemd-unit <name2.service> . Table 8.3. 
Viewing logs related to specific boots Command Description journalctl --list-boots Shows a tabular list of boot numbers, their IDs, and the timestamps of the first and last message pertaining to the boot. You can use the ID in the command to view detailed information. journalctl --boot=ID _SYSTEMD_UNIT= <name.service> Shows information about the specified boot ID. 8.4. Reviewing logs in the web console Learn how to access, review and filter logs in the RHEL web console. 8.4.1. Reviewing logs in the web console The RHEL 9 web console Logs section is a UI for the journalctl utility. You can access system logs in the web console interface. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Logs . Open log entry details by clicking on your selected log entry in the list. Note You can use the Pause button to pause new log entries from appearing. Once you resume new log entries, the web console will load all log entries that were reported after you used the Pause button. You can filter the logs by time, priority or identifier. For more information, see Filtering logs in the web console . 8.4.2. Filtering logs in the web console You can filter log entries in the web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Logs . By default, web console shows the latest log entries. To filter by a specific time range, click the Time drop-down menu and choose a preferred option. Error and above severity logs list is shown by default. To filter by different priority, click the Error and above drop-down menu and choose a preferred priority. By default, web console shows logs for all identifiers. To filter logs for a particular identifier, click the All drop-down menu and select an identifier. To open a log entry, click on a selected log. 8.4.3. Text search options for filtering logs in the web console The text search option functionality provides a big variety of options for filtering logs. If you decide to filter logs by using the text search, you can use the predefined options that are defined in the three drop-down menus, or you can type the whole search yourself. Drop-down menus There are three drop-down menus that you can use to specify the main parameters of your search: Time : This drop-down menu contains predefined searches for different time ranges of your search. Priority : This drop-down menu provides options for different priority levels. It corresponds to the journalctl --priority option. The default priority value is Error and above . It is set every time you do not specify any other priority. Identifier : In this drop-down menu, you can select an identifier that you want to filter. Corresponds to the journalctl --identifier option. Quantifiers There are six quantifiers that you can use to specify your search. They are covered in the Options for filtering logs table. Log fields If you want to search for a specific log field, it is possible to specify the field together with its content. 
Free-form text search in logs messages You can filter any text string of your choice in the logs messages. The string can also be in the form of a regular expressions. Advanced logs filtering I Filter all log messages identified by 'systemd' that happened since October 22, 2020 midnight and journal field 'JOB_TYPE' is either 'start' or 'restart. Type identifier:systemd since:2020-10-22 JOB_TYPE=start,restart to search field. Check the results. Advanced logs filtering II Filter all log messages that come from 'cockpit.service' systemd unit that happened in the boot before last and the message body contains either "error" or "fail". Type service:cockpit boot:-1 error|fail to the search field. Check the results. 8.4.4. Using a text search box to filter logs in the web console You can filter logs according to different parameters by using the text search box in the web console. The search combines usage of the filtering drop-down menus, quantifiers, log fields, and free-form string search. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Click Logs . Use the drop-down menus to specify the three main quantifiers - time range, priority, and identifier(s) - you want to filter. The Priority quantifier always has to have a value. If you do not specify it, it automatically filters the Error and above priority. Notice that the options you set reflect in the text search box. Specify the log field you want to filter. You can add several log fields. You can use a free-form string to search for anything else. The search box also accepts regular expressions. 8.4.5. Options for logs filtering There are several journalctl options, which you can use for filtering logs in the web console, that may be useful. Some of these are already covered as part of the drop-down menus in the web console interface. Table 8.4. Table Option name Usage Notes priority Filter output by message priorities. Takes a single numeric or textual log level. The log levels are the usual syslog log levels. If a single log level is specified, all messages with this log level or a lower (therefore more important) log level are shown. Covered in the Priority drop-down menu. identifier Show messages for the specified syslog identifier SYSLOG_IDENTIFIER. Can be specified multiple times. Covered in the Identifier drop-down menu. follow Shows only the most recent journal entries, and continuously prints new entries as they are appended to the journal. Not covered in a drop-down. service Show messages for the specified systemd unit. Can be specified multiple times. Is not covered in a drop-down. Corresponds to the journalctl --unit parameter. boot Show messages from a specific boot. A positive integer will look up the boots starting from the beginning of the journal, and an equal-or-less-than zero integer will look up boots starting from the end of the journal. Therefore, 1 means the first boot found in the journal in chronological order, 2 the second and so on; while -0 is the last boot, -1 the boot before last, and so on. Covered only as Current boot or boot in the Time drop-down menu. Other options need to be written manually. since Start showing entries on or newer than the specified date, or on or older than the specified date, respectively. 
Date specifications should be of the format "2012-10-30 18:17:16". If the time part is omitted, "00:00:00" is assumed. If only the seconds component is omitted, ":00" is assumed. If the date component is omitted, the current day is assumed. Alternatively the strings "yesterday", "today", "tomorrow" are understood, which refer to 00:00:00 of the day before the current day, the current day, or the day after the current day, respectively. "now" refers to the current time. Finally, relative times may be specified, prefixed with "-" or "+", referring to times before or after the current time, respectively. Not covered in a drop-down. 8.5. Additional resources journalctl(1) man page on your system Configuring a remote logging solution
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/assembly_troubleshooting-problems-using-log-files_configuring-basic-system-settings
4.7. Activating Logical Volumes on Individual Nodes in a Cluster
4.7. Activating Logical Volumes on Individual Nodes in a Cluster If you have LVM installed in a cluster environment, you may at times need to activate logical volumes exclusively on one node. To do so, use the lvchange -aey command. Alternatively, you can use the lvchange -aly command to activate logical volumes only on the local node but not exclusively; you can later activate them on additional nodes concurrently. You can also activate logical volumes on individual nodes by using LVM tags, which are described in Appendix D, LVM Object Tags . You can also specify activation of nodes in the configuration file, which is described in Appendix B, The LVM Configuration Files .
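A minimal command-line sketch of the two activation modes, assuming a hypothetical volume group vg01 with a logical volume lv01 (the names are placeholders, not part of the procedure above):

# lvchange -aey vg01/lv01    # activate exclusively on this node; other nodes cannot activate it
# lvchange -aly vg01/lv01    # activate only on the local node, without an exclusive lock
# lvchange -an vg01/lv01     # deactivate the logical volume on this node again

Use the exclusive form when an application, such as a non-clustered file system, must be guaranteed to run on a single node at a time.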
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/cluster_activation
Chapter 5. Additional resources
Chapter 5. Additional resources For more information about Red Hat build of Apache Camel for Quarkus, see the following documentation: Red Hat build of Apache Camel for Quarkus Reference Getting Started with Red Hat build of Apache Camel for Quarkus Developing Applications with Red Hat build of Apache Camel for Quarkus Migrating applications to Red Hat build of Quarkus
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/additional_resources
5.323. system-config-language
5.323. system-config-language 5.323.1. RHBA-2012:1213 - system-config-language bug fix update An updated system-config-language package that fixes one bug is now available for Red Hat Enterprise Linux 6. The system-config-language package provides a graphical user interface that allows the user to change the default language of the system. Bug Fix BZ# 819811 When using system-config-language in a non-English locale, some of the messages in the GUI were not translated. Consequently, non-English users were presented with untranslated messages. With this update, all message strings have been translated. All users of system-config-language are advised to upgrade to this updated package, which fixes this bug.
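As a brief illustration of how such an advisory is typically applied on Red Hat Enterprise Linux 6 (a general sketch, not a step taken from the advisory itself):

# yum update system-config-language      # pull in the updated package from the attached errata
# rpm -q system-config-language          # confirm the installed version afterwards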
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/system-config-language
Chapter 3. Using language support for Apache Camel extension
Chapter 3. Using language support for Apache Camel extension Important The VS Code extensions for Apache Camel are listed as development support. For more information about scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . The Visual Studio Code language support extension adds the language support for Apache Camel for XML DSL and Java DSL code. 3.1. About language support for Apache Camel extension This extension provides completion, validation and documentation features for Apache Camel URI elements directly in your Visual Studio Code editor. It works as a client using the Microsoft Language Server Protocol which communicates with Camel Language Server to provide all functionalities. 3.2. Features of language support for Apache Camel extension The important features of the language support extension are listed below: Language service support for Apache Camel URIs. Quick reference documentation when you hover the cursor over a Camel component. Diagnostics for Camel URIs. Navigation for Java and XML langauges. Creating a Camel Route specified with Yaml DSL using Camel CLI. Create a Camel Quarkus project Create a Camel on SpringBoot project Specific Camel Catalog Version Specific Runtime provider for the Camel Catalog 3.3. Requirements Following points must be considered when using the Apache Camel Language Server: Java 17 is currently required to launch the Apache Camel Language Server. The java.home VS Code option is used to use a different version of JDK than the default one installed on the machine. For some features, JBang must be available on a system command line. For an XML DSL files: Use an .xml file extension. Specify the Camel namespace, for reference, see http://camel.apache.org/schema/blueprint or http://camel.apache.org/schema/spring . For a Java DSL files: Use a .java file extension. Specify the Camel package(usually from an imported package), for example, import org.apache.camel.builder.RouteBuilder . To reference the Camel component, use from or to and a string without a space. The string cannot be a variable. For example, from("timer:timerName") works, but from( "timer:timerName") and from(aVariable) do not work. 3.4. Installing Language support for Apache Camel extension You can download the Language support for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Language Support for Apache Camel extension directly in the Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel . Select the Language Support for Apache Camel option from the search results and then click Install. This installs the language support extension in your editor. 3.5. Using specific Camel catalog version You can use the specific Camel catalog version. Click File > Preferences > Settings > Apache Camel Tooling > Camel catalog version . For Red Hat productized version that contains redhat in its version identifier, the Maven Red Hat repository is automatically added. Note For the first time a version is used, it takes several seconds/minutes to have it available depending on the time to download the dependencies in the background. Limitations The Kamelet catalog used is community supported version only. For the list of supported Kamelets, see link: Supported Kamelets Modeline configuration is based on community only. Not all traits and modeline parameters are supported. 
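Because several of the features above, such as creating a Camel route in YAML DSL, delegate to the Camel CLI through JBang, a minimal command-line sketch of that flow is shown below; it assumes JBang is already installed and on the PATH, and the route file name is only an example:

$ jbang app install camel@apache/camel     # install the Camel CLI via JBang (one-time step)
$ camel init my-route.camel.yaml           # scaffold a new route in YAML DSL
$ camel run my-route.camel.yaml            # run the route locally for a quick test

The extension offers the same actions through the VS Code command palette, so running them by hand is only needed when working outside the editor.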
Additional resources Language Support for Apache Camel by Red Hat
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/tooling_guide_for_red_hat_build_of_apache_camel/using-vscode-language-support-extension
function::kernel_string2
function::kernel_string2 Name function::kernel_string2 - Retrieves a string from kernel memory with an alternative error string Synopsis Arguments addr The kernel address to retrieve the string from err_msg The error message to return when the data is not available Description This function returns the null-terminated C string from a given kernel memory address. Instead of raising an error, it returns the given error message if a string copy fault occurs.
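A short usage sketch follows; it is illustrative only and assumes that kernel debuginfo is installed and that the dev_queue_xmit() probe point and its skb argument resolve on your kernel, which is not guaranteed on every kernel version:

$ stap -e 'probe kernel.function("dev_queue_xmit") {
    # dev->name lives in kernel memory; fall back to a fixed message on a copy fault
    printf("xmit on %s\n", kernel_string2($skb->dev->name, "<unknown device>"))
  }'

Compared with kernel_string, the second argument avoids aborting the script when the address cannot be read.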
[ "kernel_string2:string(addr:long,err_msg:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string2
Chapter 4. Stretch clusters for Ceph storage
Chapter 4. Stretch clusters for Ceph storage As a storage administrator, you can configure stretch clusters by entering stretch mode with 2-site clusters. Red Hat Ceph Storage is capable of withstanding the loss of Ceph OSDs because its network and cluster are equally reliable, with failures randomly distributed across the CRUSH map. If a number of OSDs are shut down, the remaining OSDs and monitors still manage to operate. However, this might not be the best solution for some stretched cluster configurations where a significant part of the Ceph cluster can use only a single network component. The typical example is a single cluster located in multiple data centers, for which the user wants to sustain the loss of a full data center. The standard configuration is with two data centers. Other configurations are in clouds or availability zones. Each site holds two copies of the data; therefore, the replication size is four. The third site should have a tiebreaker monitor; this can be a virtual machine, or a monitor with higher latency than the main sites. This monitor chooses one of the sites to restore data if the network connection fails and both data centers remain active. Note The standard Ceph configuration survives many failures of the network or data centers and never compromises data consistency. If you restore enough Ceph servers following a failure, it recovers. Ceph maintains availability if you lose a data center but can still form a quorum of monitors and have all the data available, with enough copies to satisfy the pools' min_size , or CRUSH rules that replicate again to meet the size. Note There are no additional steps to power down a stretch cluster. See Powering down and rebooting Red Hat Ceph Storage cluster for more information. Stretch cluster failures Red Hat Ceph Storage never compromises on data integrity and consistency. If there is a network failure or a loss of nodes and the services can still be restored, Ceph returns to normal functionality on its own. However, there are situations where you lose data availability even if you have enough servers available to meet Ceph's consistency and sizing constraints, or where you unexpectedly do not meet the constraints. The first important type of failure is caused by inconsistent networks. If there is a network split, Ceph might be unable to mark an OSD as down to remove it from the acting placement group (PG) sets, despite the primary OSD being unable to replicate data. When this happens, I/O is not permitted because Ceph cannot meet its durability guarantees. The second important category of failures is when it appears that you have data replicated across data centers, but the constraints are not sufficient to guarantee this. For example, you might have data centers A and B, and the CRUSH rule targets three copies and places a copy in each data center with a min_size of 2 . The PG might go active with two copies in site A and no copies in site B, which means that if you lose site A, you lose the data and Ceph cannot operate on it. This situation is difficult to avoid with standard CRUSH rules. 4.1. Stretch mode for a storage cluster To configure stretch clusters, you must enter stretch mode. When stretch mode is enabled, the Ceph OSDs only take PGs as active when they peer across data centers, or whichever other CRUSH bucket type you specified, assuming both are active. Pools increase in size from the default three to four, with two copies on each site.
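To see how the size and min_size values discussed above look on a live cluster, you can inspect the replicated pools before and after enabling stretch mode; the pool name rbd_pool is only an example:

# ceph osd pool ls detail                  # shows size, min_size, and crush_rule for every pool
# ceph osd pool get rbd_pool size          # expected to report 4 once stretch mode is active
# ceph osd pool get rbd_pool min_size      # normally 2, dropping to 1 only in degraded stretch mode

These commands are read-only, so they are safe to run while deciding whether the current CRUSH rules provide the cross-site guarantees described above.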
In stretch mode, Ceph OSDs are only allowed to connect to monitors within the same data center. New monitors are not allowed to join the cluster without specified location. If all the OSDs and monitors from a data center become inaccessible at once, the surviving data center will enter a degraded stretch mode. This issues a warning, reduces the min_size to 1 , and allows the cluster to reach an active state with the data from the remaining site. Note The degraded state also triggers warnings that the pools are too small, because the pool size does not get changed. However, a special stretch mode flag prevents the OSDs from creating extra copies in the remaining data center, therefore it still keeps 2 copies. When the missing data center becomes accesible again, the cluster enters recovery stretch mode. This changes the warning and allows peering, but still requires only the OSDs from the data center, which was up the whole time. When all PGs are in a known state and are not degraded or incomplete, the cluster goes back to the regular stretch mode, ends the warning, and restores min_size to its starting value 2 . The cluster again requires both sites to peer, not only the site that stayed up the whole time, therefore you can fail over to the other site, if necessary. Stretch mode limitations It is not possible to exit from stretch mode once it is entered. You cannot use erasure-coded pools with clusters in stretch mode. You can neither enter the stretch mode with erasure-coded pools, nor create an erasure-coded pool when the stretch mode is active. Stretch mode with no more than two sites is supported. The weights of the two sites should be the same. If they are not, you receive the following error: Example To achieve same weights on both sites, the Ceph OSDs deployed in the two sites should be of equal size, that is, storage capacity in the first site is equivalent to storage capacity in the second site. While it is not enforced, you should run two Ceph monitors on each site and a tiebreaker, for a total of five. This is because OSDs can only connect to monitors in their own site when in stretch mode. You have to create your own CRUSH rule, which provides two copies on each site, which totals to four on both sites. You cannot enable stretch mode if you have existing pools with non-default size or min_size . Because the cluster runs with min_size 1 when degraded, you should only use stretch mode with all-flash OSDs. This minimizes the time needed to recover once connectivity is restored, and minimizes the potential for data loss. Additional Resources See Troubleshooting clusters in stretch mode for troubleshooting steps. 4.1.1. Setting the CRUSH location for the daemons Before you enter the stretch mode, you need to prepare the cluster by setting the CRUSH location to the daemons in the Red Hat Ceph Storage cluster. There are two ways to do this: Bootstrap the cluster through a service configuration file, where the locations are added to the hosts as part of deployment. Set the locations manually through ceph osd crush add-bucket and ceph osd crush move commands after the cluster is deployed. Method 1: Bootstrapping the cluster Prerequisites Root-level access to the nodes. 
Procedure If you are bootstrapping your new storage cluster, you can create the service configuration .yaml file that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run: Example Bootstrap the storage cluster with the --apply-spec option: Syntax Example Important You can use different command options with the cephadm bootstrap command. However, always include the --apply-spec option to use the service configuration file and configure the host locations. Additional Resources See Bootstrapping a new storage cluster for more information about Ceph bootstrapping and different cephadm bootstrap command options. Method 2: Setting the locations after the deployment Prerequisites Root-level access to the nodes. Procedure Add to the CRUSH map the two buckets in which you plan to set the location of your non-tiebreaker monitors, specifying the bucket type as datacenter : Syntax Example Move the buckets under root=default : Syntax Example Move the OSD hosts according to the required CRUSH placement: Syntax Example 4.1.2. Entering the stretch mode The new stretch mode is designed to handle two sites. There is a lower risk of component availability outages with 2-site clusters. Prerequisites Root-level access to the nodes. The CRUSH location is set to the hosts. Procedure Set the location of each monitor, matching your CRUSH map: Syntax Example Generate a CRUSH rule which places two copies in each data center: Syntax Example Edit the decompiled CRUSH map file to add a new rule: Example 1 The rule id has to be unique. In this example, there is only one other rule with id 0 , therefore the id 1 is used; however, you might need to use a different rule ID depending on the number of existing rules. 2 3 In this example, there are two data center buckets named DC1 and DC2 . Note This rule makes the cluster have read-affinity towards data center DC1 . Therefore, all the reads or writes happen through Ceph OSDs placed in DC1 . If this is not desirable, and reads or writes are to be distributed evenly across the zones, the CRUSH rule is the following: Example In this rule, the data center is selected randomly and automatically. See CRUSH rules for more information on firstn and indep options. Inject the CRUSH map to make the rule available to the cluster: Syntax Example If you do not run the monitors in connectivity mode, set the election strategy to connectivity : Example Enter stretch mode by setting the location of the tiebreaker monitor to split across the data centers: Syntax Example In this example, the monitor mon.host07 is the tiebreaker. Important The location of the tiebreaker monitor should differ from the data centers to which you previously set the non-tiebreaker monitors. In the example above, it is data center DC3 . Important Do not add this data center to the CRUSH map, as it results in the following error when you try to enter stretch mode: Note If you are writing your own tooling for deploying Ceph, you can use a new --set-crush-location option when booting monitors, instead of running the ceph mon set_location command. This option accepts only a single bucket=location pair, for example ceph-mon --set-crush-location 'datacenter=DC1' , which must match the bucket type you specified when running the enable_stretch_mode command. Verify that the stretch mode is enabled successfully: Example The stretch_mode_enabled parameter should be set to true.
You can also see the number of stretch buckets, stretch mode buckets, and whether the stretch mode is degraded or recovering. Verify that the monitors are in the appropriate locations: Example You can also see which monitor is the tiebreaker, and the monitor election strategy. Additional Resources See Configuring monitor election strategy for more information about monitor election strategy. 4.1.3. Adding OSD hosts in stretch mode You can add Ceph OSDs in stretch mode. The procedure is similar to adding OSD hosts on a cluster where stretch mode is not enabled. Prerequisites A running Red Hat Ceph Storage cluster. Stretch mode is enabled on the cluster. Root-level access to the nodes. Procedure List the available devices to deploy OSDs: Syntax Example Deploy the OSDs on specific hosts or on all the available devices: Create an OSD from a specific device on a specific host: Syntax Example Deploy OSDs on any available and unused devices: Important This command creates collocated WAL and DB devices. If you want to create non-collocated devices, do not use this command. Example Move the OSD hosts under the CRUSH bucket: Syntax Example Note Ensure you add the same topology nodes on both sites. Issues might arise if hosts are added only on one site. Additional Resources See Adding OSDs for more information about the addition of Ceph OSDs. 4.2. Read affinity in stretch clusters Read Affinity reduces cross-zone traffic by keeping the data access within the respective data centers. For stretched clusters deployed in multi-zone environments, the read affinity topology implementation provides a mechanism to help keep traffic within the data center it originated from. Ceph Object Gateway volumes have the ability to read data from an OSD in proximity to the client, according to OSD locations defined in the CRUSH map and topology labels on nodes. For example, a stretch cluster contains a Ceph Object Gateway primary OSD and replicated OSDs spread across two data centers A and B. If a GET action is performed on an object in data center A, the READ operation is performed on the data of the OSDs closest to the client in data center A. 4.2.1. Performing localized reads You can perform a localized read on a replicated pool in a stretch cluster. When a localized read request is made on a replicated pool, Ceph selects the local OSDs closest to the client based on the client location specified in crush_location. Prerequisites A stretch cluster with two data centers and Ceph Object Gateway configured on both. A user created with a bucket having primary and replicated OSDs. Procedure To perform a localized read, set rados_replica_read_policy to 'localize' in the OSD daemon configuration using the ceph config set command. Verification : Perform the following steps to verify the localized read from an OSD set. Run the ceph osd tree command to view the OSDs and the data centers. Example Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example Verify whether a localized read has happened by opening the Ceph Object Gateway logs with the vim command. Example You can see in the logs that a localized read has taken place. Important To be able to view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command. 4.2.2. Performing balanced reads You can perform a balanced read on a pool so that read operations are spread evenly across the OSDs in the data centers.
When a balanced READ is issued on a pool, the read operations are distributed evenly across all OSDs that are spread across the data centers. Prerequisites A stretch cluster with two data centers and Ceph Object Gateway configured on both. A user created with a bucket and OSDs - primary and replicated OSDs. Procedure To perform a balanced read, set rados_replica_read_policy to 'balance' in the OSD daemon configuration using the ceph config set command. Verification : Perform the below steps to verify the balance read from an OSD set. Run the ceph osd tree command to view the OSDs and the data centers. Example Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example Verify if a balanced read has happened by running the vim command on the Ceph Object Gateway logs. Example You can see in the logs that a balanced read has taken place. Important To be able to view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command. 4.2.3. Performing default reads You can perform a default read on a pool to retrieve data from primary data centers. When a default READ is issued on a pool, the IO operations are retrieved directly from each OSD in the data center. Prerequisites A stretch cluster with two data centers and Ceph Object Gateway configured on both. A user created with a bucket and OSDs - primary and replicated OSDs. Procedure To perform a default read, set rados_replica_read_policy to 'default' in the OSD daemon configuration by using the ceph config set command. Example The IO operations from the closest OSD in a data center are retrieved when a GET operation is performed. Verification : Perform the below steps to verify the localized read from an OSD set. Run the ceph osd tree command to view the OSDs and the data centers. Example Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example Verify if a default read has happened by running the vim command on the Ceph Object Gateway logs. Example You can see in the logs that a default read has taken place. Important To be able to view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command.
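The three read policies described above differ only in the value given to the same client option; a condensed sketch, using the client.rgw.rgw.1 daemon name from the examples in this chapter as a stand-in for your own Ceph Object Gateway client name:

# ceph config set client.rgw.rgw.1 rados_replica_read_policy localize   # read from the OSD closest to the client
# ceph config set client.rgw.rgw.1 rados_replica_read_policy balance    # spread reads across OSDs in both sites
# ceph config set client.rgw.rgw.1 rados_replica_read_policy default    # read from the primary OSD as usual
# ceph config set client.rgw.rgw.1 debug_ms 1/1                         # optional: raise messenger debug level so the read type shows up in the logs

Only one policy is in effect at a time, so setting a new value replaces the previous one for that client.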
[ "ceph mon enable_stretch_mode host05 stretch_rule datacenter Error EINVAL: the 2 datacenter instances in the cluster have differing weights 25947 and 15728 but stretch mode currently requires they be the same!", "service_type: host addr: host01 hostname: host01 location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: host02 hostname: host02 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: host03 hostname: host03 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: host04 hostname: host04 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: host05 hostname: host05 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: host06 hostname: host06 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: host07 hostname: host07 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "ceph osd crush add-bucket BUCKET_NAME BUCKET_TYPE", "ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter", "ceph osd crush move BUCKET_NAME root=default", "ceph osd crush move DC1 root=default ceph osd crush move DC2 root=default", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host01 datacenter=DC1", "ceph mon set_location HOST datacenter= DATACENTER", "ceph mon set_location host01 datacenter=DC1 ceph mon set_location host02 datacenter=DC1 ceph mon set_location host04 datacenter=DC2 ceph mon set_location host05 datacenter=DC2 ceph mon set_location host07 datacenter=DC3", "ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt", "rule stretch_rule { id 1 1 type replicated min_size 1 max_size 10 step take DC1 2 step chooseleaf firstn 2 type host step emit step take DC2 3 step chooseleaf firstn 2 type host step emit }", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit }", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin", "ceph mon set election_strategy connectivity", "ceph mon set_location HOST datacenter= DATACENTER ceph mon enable_stretch_mode HOST stretch_rule datacenter", "ceph mon set_location host07 datacenter=DC3 ceph mon enable_stretch_mode host07 stretch_rule datacenter", "Error 
EINVAL: there are 3 datacenters in the cluster but stretch mode currently only works with 2!", "ceph osd dump epoch 361 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d created 2023-01-16T05:47:28.482717+0000 modified 2023-01-17T17:36:50.066183+0000 flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit crush_version 31 full_ratio 0.95 backfillfull_ratio 0.92 nearfull_ratio 0.85 require_min_compat_client luminous min_compat_client luminous require_osd_release quincy stretch_mode_enabled true stretch_bucket_count 2 degraded_stretch_mode 0 recovering_stretch_mode 0 stretch_mode_bucket 8", "ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host07 disallowed_leaders host07 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host07; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19", "ceph orch device ls [--hostname= HOST_1 HOST_2 ] [--wide] [--refresh]", "ceph orch device ls", "ceph orch daemon add osd HOST : DEVICE_PATH", "ceph orch daemon add osd host03:/dev/sdb", "ceph orch apply osd --all-available-devices", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host03 datacenter=DC1 ceph osd crush move host06 datacenter=DC2", "ceph config set client.rgw.rgw.1 rados_replica_read_policy localize", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim 
/var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-26T08:07:45.471+0000 7fc623e63640 1 ====== starting new request req=0x7fc5b93694a0 ===== 2024-08-26T08:07:45.471+0000 7fc623e63640 1 -- 10.0.67.142:0/279982082 --> [v2:10.0.66.23:6816/73244434,v1:10.0.66.23:6817/73244434] -- osd_op(unknown.0.0:9081 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+localize_reads+known_if_redirected+supports_pool_eio e3533) -- 0x55f781bd2000 con 0x55f77f0e8c00", "ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1", "ceph config set client.rgw.rgw.1 rados_replica_read_policy balance", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-27T09:32:25.510+0000 7f2a7a284640 1 ====== starting new request req=0x7f2a31fcf4a0 ===== 2024-08-27T09:32:25.510+0000 7f2a7a284640 1 -- 10.0.67.142:0/3116867178 --> [v2:10.0.64.146:6816/2838383288,v1:10.0.64.146:6817/2838383288] -- osd_op(unknown.0.0:268731 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+balance_reads+known_if_redirected+supports_pool_eio e3554) -- 0x55cd1b88dc00 con 0x55cd18dd6000", "ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1", "ceph config set client.rgw.rgw.1 advanced rados_replica_read_policy default", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 
osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-28T10:26:05.155+0000 7fe6b03dd640 1 ====== starting new request req=0x7fe6879674a0 ===== 2024-08-28T10:26:05.156+0000 7fe6b03dd640 1 -- 10.0.64.251:0/2235882725 --> [v2:10.0.65.171:6800/4255735352,v1:10.0.65.171:6801/4255735352] -- osd_op(unknown.0.0:1123 11.6d 11:b69767fc:::699c2d80-5683-43c5-bdcd-e8912107c176.24827.3_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e4513) -- 0x5639da653800 con 0x5639d804d800", "ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/stretch-clusters-for-ceph-storage
Chapter 12. Installing a cluster on Azure in a restricted network
Chapter 12. Installing a cluster on Azure in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure in a restricted network by creating an internal mirror of the installation release content on an existing Azure Virtual Network (VNet). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster requires internet access to use the Azure APIs. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VNet in Azure. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VNet. You must use a user-provisioned VNet that satisfies one of the following requirements: The VNet contains the mirror registry The VNet has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 12.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. 
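As a rough sketch of what that looks like in practice, the relevant install-config.yaml keys are shown below once the file has been created; treat this as an assumption-laden fragment rather than a complete configuration, since the rest of the file (networking, credentials, and so on) is omitted:

$ cat install-config.yaml
...
platform:
  azure:
    outboundType: UserDefinedRouting    # use your own egress path instead of a public load balancer
publish: Internal                       # required when Azure Firewall restricts internet access
...

With outboundType set to UserDefinedRouting, the installation program skips creating public IP addresses and the public load balancer, as described above.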
A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Restricted cluster with Azure Firewall You can use Azure Firewall to restrict the outbound routing for the Virtual Network (VNet) that is used to install the OpenShift Container Platform cluster. For more information, see providing user-defined routing with Azure Firewall . You can create a OpenShift Container Platform cluster in a restricted network by using VNet with Azure Firewall and configuring the user-defined routing. Important If you are using Azure Firewall for restricting internet access, you must set the publish field to Internal in the install-config.yaml file. This is because Azure Firewall does not work properly with Azure public load balancers . 12.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 12.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. 
For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 12.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 12.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x * Allows connections to Azure APIs. You must set a Destination Service Tag to AzureCloud . [1] x x * Denies connections to the internet. You must set a Destination Service Tag to Internet . [1] x x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. 
Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 12.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 12.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 12.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 12.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. 
You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. 
The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. 
This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 12.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 12.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 12.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 12.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 12.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 12.6.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 
4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 12.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 12.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 When using Azure Firewall to restrict Internet access, you must configure outbound routing to send traffic through the Azure Firewall. Configuring user-defined routing prevents exposing external endpoints in your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide the imageContentSources section from the output of the command to mirror the repository. 26 How to publish the user-facing endpoints of your cluster. When using Azure Firewall to restrict Internet access, set publish to Internal to deploy a private cluster. The user-facing endpoints then cannot be accessed from the internet. The default value is External . 12.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 12.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 12.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... 
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 12.8.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 12.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 12.3. Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 12.8.2.2. 
Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
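For illustration only, a fully specified invocation of the ccoctl azure create-all command might resemble the following. Every value shown here, including the infrastructure name, directories, region, IDs, and DNS zone resource group name, is a hypothetical placeholder rather than a value defined by this procedure; the flags map one to one to the numbered placeholders described above:
ccoctl azure create-all \
  --name=myocp \
  --output-dir=./ccoctl-output \
  --region=eastus \
  --subscription-id=00000000-0000-0000-0000-000000000000 \
  --credentials-requests-dir=./credentials-requests \
  --dnszone-resource-group-name=my-dns-zone-rg \
  --tenant-id=11111111-1111-1111-1111-111111111111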
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 12.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 12.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-restricted-networks-azure-installer-provisioned
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/proc_providing-feedback-on-red-hat-documentation_deploying-different-types-of-servers
6.6. Booleans for Users Executing Applications
6.6. Booleans for Users Executing Applications Not allowing Linux users to execute applications (which inherit users' permissions) in their home directories and the /tmp directory, which they have write access to, helps prevent flawed or malicious applications from modifying files that users own. Booleans are available to change this behavior, and are configured with the setsebool utility, which must be run as root. The setsebool -P command makes persistent changes. Do not use the -P option if you do not want changes to persist across reboots: guest_t To prevent Linux users in the guest_t domain from executing applications in their home directories and /tmp : xguest_t To prevent Linux users in the xguest_t domain from executing applications in their home directories and /tmp : user_t To prevent Linux users in the user_t domain from executing applications in their home directories and /tmp : staff_t To prevent Linux users in the staff_t domain from executing applications in their home directories and /tmp : To turn the staff_exec_content boolean on and to allow Linux users in the staff_t domain to execute applications in their home directories and /tmp :
[ "~]# setsebool -P guest_exec_content off", "~]# setsebool -P xguest_exec_content off", "~]# setsebool -P user_exec_content off", "~]# setsebool -P staff_exec_content off", "~]# setsebool -P staff_exec_content on" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-confining_users-booleans_for_users_executing_applications
Chapter 8. Triggering and modifying builds
Chapter 8. Triggering and modifying builds The following sections outline how to trigger builds and modify builds using build hooks. 8.1. Build triggers When defining a BuildConfig , you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available: Webhook Image change Configuration change 8.1.1. Webhook triggers Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. When the push events are processed, the OpenShift Container Platform control plane host confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig . If so, it then checks out the exact commit reference noted in the webhook event on the OpenShift Container Platform build. If they do not match, no build is triggered. Note oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but any other needed webhook triggers must be added manually. You can manually add triggers by setting triggers. For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation. For example here is a GitHub webhook with a reference to a secret named mysecret : type: "GitHub" github: secretReference: name: "mysecret" The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object. - kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx 8.1.1.1. Using GitHub webhooks GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook. Example GitHub webhook definition: type: "GitHub" github: secretReference: name: "mysecret" Note The secret used in the webhook trigger configuration is not the same as secret field you encounter when configuring webhook in GitHub UI. The former is to make the webhook URL unique and hard to predict, the latter is an optional string field used to create HMAC hex digest of the body, which is sent as an X-Hub-Signature header. The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Prerequisites Create a BuildConfig from a GitHub repository. 
Procedure To configure a GitHub Webhook: After creating a BuildConfig from a GitHub repository, run: USD oc describe bc/<name-of-your-BuildConfig> This generates a webhook GitHub URL that looks like: Example output <https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Cut and paste this URL into GitHub, from the GitHub web console. In your GitHub repository, select Add Webhook from Settings Webhooks . Paste the URL output into the Payload URL field. Change the Content Type from GitHub's default application/x-www-form-urlencoded to application/json . Click Add webhook . You should see a message from GitHub stating that your webhook was successfully configured. Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts. Note Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github The -k argument is only necessary if your API server does not have a properly signed certificate. Note The build will only be triggered if the ref value from GitHub webhook event matches the ref value specified in the source.git field of the BuildConfig resource. Additional resources Gogs 8.1.1.2. Using GitLab webhooks GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "GitLab" gitlab: secretReference: name: "mysecret" The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab Procedure To configure a GitLab Webhook: Describe the BuildConfig to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab The -k argument is only necessary if your API server does not have a properly signed certificate. 8.1.1.3. Using Bitbucket webhooks Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. Similar to the triggers, you must specify a secret. 
The following example is a trigger definition YAML within the BuildConfig : type: "Bitbucket" bitbucket: secretReference: name: "mysecret" The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket Procedure To configure a Bitbucket Webhook: Describe the 'BuildConfig' to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket The -k argument is only necessary if your API server does not have a properly signed certificate. 8.1.1.4. Using generic webhooks Generic webhooks are invoked from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig : type: "Generic" generic: secretReference: name: "mysecret" allowEnv: true 1 1 Set to true to allow a generic webhook to pass in environment variables. Procedure To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The caller must invoke the webhook as a POST operation. To invoke the webhook manually you can use curl : USD curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The HTTP verb must be set to POST . The insecure -k flag is specified to ignore certificate validation. This second flag is not necessary if your cluster has properly signed certificates. The endpoint can accept an optional payload with the following format: git: uri: "<url to git repository>" ref: "<optional git reference>" commit: "<commit hash identifying a specific git commit>" author: name: "<author name>" email: "<author e-mail>" committer: name: "<committer name>" email: "<committer e-mail>" message: "<commit message>" env: 1 - name: "<variable name>" value: "<variable value>" 1 Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior. 
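As a concrete illustration of the payload format described above, a filled-in payload might look like the following sketch. The values are illustrative placeholders, not taken from this document; any field you do not need can be omitted.

git:
  uri: "https://github.com/example/app.git"
  ref: "refs/heads/main"
  commit: "a1b2c3d4e5f67890a1b2c3d4e5f67890a1b2c3d4"
  author:
    name: "Jane Developer"
    email: "jane@example.com"
  committer:
    name: "Jane Developer"
    email: "jane@example.com"
  message: "trigger rebuild"
env:
  - name: "BUILD_LOGLEVEL"
    value: "5"

Remember that the env entries are honored only if allowEnv: true is set on the trigger, as described above.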
To pass this payload using curl , define it in a file named payload_file.yaml and run: USD curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The arguments are the same as the example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request. Note OpenShift Container Platform permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK response. 8.1.1.5. Displaying webhook URLs You can use the following command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is defined for that build configuration. Procedure To display any webhook URLs associated with a BuildConfig , run: USD oc describe bc <name> 8.1.2. Using image change triggers As a developer, you can configure your build to run automatically every time a base image changes. You can use image change triggers to automatically invoke your build when a new version of an upstream image is available. For example, if a build is based on a RHEL image, you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image. Note Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries. Procedure Define an ImageStream that points to the upstream image you want to use as a trigger: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby-20-centos7" This defines the image stream that is tied to a container image repository located at <system-registry> / <namespace> /ruby-20-centos7 . The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform. If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream : strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace. Define a build with one or more triggers that point to ImageStreams : type: "ImageChange" 1 imageChange: {} type: "ImageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" 1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy's from field. The imageChange object here must be empty. 2 An image change trigger that monitors an arbitrary image stream. The imageChange part, in this case, must include a from field that references the ImageStreamTag to monitor. 
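To see how these pieces fit together, the following sketch shows a minimal BuildConfig whose strategy consumes the ruby-20-centos7:latest image stream tag and whose trigger rebuilds the application whenever that tag is updated. This is not an excerpt from this guide; the Git repository URL, the BuildConfig name, and the output image stream tag are placeholders:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: ruby-sample-build
spec:
  source:
    git:
      uri: "https://github.com/example/ruby-hello-world.git"
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "ruby-20-centos7:latest"
  output:
    to:
      kind: "ImageStreamTag"
      name: "ruby-sample:latest"
  triggers:
  - type: "ImageChange"
    imageChange: {}

Because the imageChange object is empty, the trigger monitors the image stream tag referenced by the strategy's from field, as described in the callouts above.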
When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference. Since this example has an image change trigger for the strategy, the resulting build is: strategy: sourceStrategy: from: kind: "DockerImage" name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>" This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs. You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered. type: "ImageChange" imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" paused: true In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist, then it is updated with the immutable image reference. If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy . This ensures that builds are performed using consistent image tags for ease of reproduction. Additional resources v1 container registries 8.1.3. Identifying the image change trigger of a build As a developer, if you have image change triggers, you can identify which image change initiated the last build. This can be useful for debugging or troubleshooting builds. Example BuildConfig apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: # ... triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: "2021-06-30T13:47:53Z" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1 Note This example omits elements that are not related to image change triggers. Prerequisites You have configured multiple image change triggers. These triggers have triggered one or more builds. Procedure Under buildConfig.status.imageChangeTriggers , compare timestamps to identify the ImageChangeTriggerStatus that has the most recent lastTriggerTime . This ImageChangeTriggerStatus corresponds to the last build. Then use the name and namespace from that ImageChangeTriggerStatus to find the corresponding image change trigger in buildConfig.spec.triggers . Image change triggers In your build configuration, buildConfig.spec.triggers is an array of build trigger policies, BuildTriggerPolicy .
Each BuildTriggerPolicy has a type field and a set of pointer fields. Each pointer field corresponds to one of the allowed values for the type field. As such, you can set only one pointer field in each BuildTriggerPolicy . For image change triggers, the value of type is ImageChange . Then, the imageChange field is the pointer to an ImageChangeTrigger object, which has the following fields: lastTriggeredImageID : This field, which is not shown in the example, is deprecated in OpenShift Container Platform 4.8 and will be ignored in a future release. It contains the resolved image reference for the ImageStreamTag when the last build was triggered from this BuildConfig . paused : You can use this field, which is not shown in the example, to temporarily disable this particular image change trigger. from : You use this field to reference the ImageStreamTag that drives this image change trigger. Its type is the core Kubernetes type, OwnerReference . The from field has the following fields of note: kind : For image change triggers, the only supported value is ImageStreamTag . namespace : You use this field to specify the namespace of the ImageStreamTag . name : You use this field to specify the ImageStreamTag . Image change trigger status In your build configuration, buildConfig.status.imageChangeTriggers is an array of ImageChangeTriggerStatus elements. Each ImageChangeTriggerStatus element includes the from , lastTriggeredImageID , and lastTriggerTime elements shown in the preceding example. The ImageChangeTriggerStatus that has the most recent lastTriggerTime triggered the most recent build. You use its name and namespace to identify the image change trigger in buildConfig.spec.triggers that triggered the build. The lastTriggerTime with the most recent timestamp signifies the ImageChangeTriggerStatus of the last build. This ImageChangeTriggerStatus has the same name and namespace as the image change trigger in buildConfig.spec.triggers that triggered the build. Additional resources v1 container registries 8.1.4. Configuration change triggers A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created. The following is an example trigger definition YAML within the BuildConfig : type: "ConfigChange" Note Configuration change triggers currently only work when creating a new BuildConfig . In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated. 8.1.4.1. Setting triggers manually Triggers can be added to and removed from build configurations with oc set triggers . Procedure To set a GitHub webhook trigger on a build configuration, use: USD oc set triggers bc <name> --from-github To set an image change trigger, use: USD oc set triggers bc <name> --from-image='<image>' To remove a trigger, add --remove : USD oc set triggers bc <name> --from-bitbucket --remove Note When a webhook trigger already exists, adding it again regenerates the webhook secret. For more information, consult the help documentation by running: USD oc set triggers --help 8.2. Build hooks Build hooks allow behavior to be injected into the build process. The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry.
The current working directory is set to the image's WORKDIR , which is the default working directory of the container image. For most images, this is where the source code is located. The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs. Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0 , the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests. The postCommit hook is not only limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image. 8.2.1. Configuring post commit build hooks There are different ways to configure the post build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose . Procedure Shell script: postCommit: script: "bundle exec rake test --verbose" The script value is a shell script to be run with /bin/sh -ic . Use this when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point, or if the image does not have /bin/sh , use command and/or args . Note The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release. Command as the image entry point: postCommit: command: ["/bin/bash", "-c", "bundle exec rake test --verbose"] In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference . This is needed if the image does not have /bin/sh , or if you do not want to use a shell. In all other cases, using script might be more convenient. Command with arguments: postCommit: command: ["bundle", "exec", "rake", "test"] args: ["--verbose"] This form is equivalent to appending the arguments to command . Note Providing both script and command simultaneously creates an invalid build hook. 8.2.2. Using the CLI to set post commit build hooks The oc set build-hook command can be used to set the build hook for a build configuration. Procedure To set a command as the post-commit build hook: USD oc set build-hook bc/mybc \ --post-commit \ --command \ -- bundle exec rake test --verbose To set a script as the post-commit build hook: USD oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose"
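If you later need to clear a hook configured this way, oc set build-hook also supports removal. The following is a sketch; verify the flag against the oc version in use:

oc set build-hook bc/mybc --post-commit --remove

You can confirm the resulting postCommit field at any time by inspecting the build configuration with oc get bc/mybc -o yaml.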
[ "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name-of-your-BuildConfig>", "<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag 
name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/triggering-builds-build-hooks
Chapter 8. Asynchronous errata updates
Chapter 8. Asynchronous errata updates 8.1. RHBA-2024:6399 OpenShift Data Foundation 4.13.11 bug fixes and security updates OpenShift Data Foundation release 4.13.11 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:6399 advisory. 8.2. RHBA-2024:4358 OpenShift Data Foundation 4.13.10 bug fixes and security updates OpenShift Data Foundation release 4.13.10 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:4358 advisory. 8.3. RHBA-2024:3865 OpenShift Data Foundation 4.13.9 bug fixes and security updates OpenShift Data Foundation release 4.13.9 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:3865 advisory. 8.4. RHBA-2024:1657 OpenShift Data Foundation 4.13.8 bug fixes and security updates OpenShift Data Foundation release 4.13.8 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1657 advisory. 8.5. RHBA-2024:0540 OpenShift Data Foundation 4.13.7 bug fixes and security updates OpenShift Data Foundation release 4.13.7 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:0540 advisory. 8.6. RHBA-2023:7870 OpenShift Data Foundation 4.13.6 bug fixes and security updates OpenShift Data Foundation release 4.13.6 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:7870 advisory. 8.7. RHBA-2023:7775 OpenShift Data Foundation 4.13.5 bug fixes and security updates OpenShift Data Foundation release 4.13.5 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:7775 advisory. 8.8. RHBA-2023:6146 OpenShift Data Foundation 4.13.4 bug fixes and security updates OpenShift Data Foundation release 4.13.4 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:6146 advisory. 8.9. RHSA-2023:5376 OpenShift Data Foundation 4.13.3 bug fixes and security updates OpenShift Data Foundation release 4.13.3 is now available. The bug fixes that are included in the update are listed in the RHSA-2023:5376 advisory. 8.10. RHBA-2023:4716 OpenShift Data Foundation 4.13.2 bug fixes and security updates OpenShift Data Foundation release 4.13.2 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:4716 advisory. 8.11. RHSA-2023:4437 OpenShift Data Foundation 4.13.1 bug fixes and security updates OpenShift Data Foundation release 4.13.1 is now available. The bug fixes that are included in the update are listed in the RHSA-2023:4437 advisory.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/asynchronous_errata_updates
Chapter 23. Optimizing the JBoss EAP server configuration
Chapter 23. Optimizing the JBoss EAP server configuration Once you have installed the JBoss EAP server and created a management user , Red Hat recommends that you optimize your server configuration. Make sure you review the Performance tuning for JBoss EAP guide for information about how to optimize the server configuration to avoid common problems when deploying applications in a production environment. Common optimizations include setting ulimits , enabling garbage collection , creating Java heap dumps , and adjusting the thread pool size . It is also a good idea to keep your instance of JBoss EAP up to date with the latest bug fixes. For more information, see Updating Red Hat JBoss Enterprise Application Platform .
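As one illustration of the ulimit tuning mentioned above, the snippet below raises the open-file limit for a dedicated service account. This is only a sketch: the user name jboss-eap and the limit values are assumptions for the example, not recommendations from this guide; consult the Performance tuning for JBoss EAP guide for values appropriate to your workload.

# /etc/security/limits.conf (illustrative values)
jboss-eap  soft  nofile  65536
jboss-eap  hard  nofile  65536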
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/optimize-server-config
B.81. resource-agents
B.81. resource-agents B.81.1. RHBA-2010:0835 - resource-agents bug fix update Updated resource-agents packages that provide a fix for a bug are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain the cluster resource agents for use by rgmanager and pacemaker. These agents allow users to build highly available services. Bug Fix BZ# 640190 The config-utils library did not work correctly with certain references, causing problems with several agents. All users of the resource-agents package are advised to upgrade to these updated packages, which address this issue.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/resource-agents
probe::ipmib.InAddrErrors
probe::ipmib.InAddrErrors Name probe::ipmib.InAddrErrors - Count arriving packets with an incorrect address Synopsis ipmib.InAddrErrors Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global InAddrErrors counter (equivalent to SNMP's MIB IPSTATS_MIB_INADDRERRORS).
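A minimal SystemTap one-liner that uses this probe might simply accumulate the op values and print the total when tracing stops. This is a sketch for illustration, not part of the tapset reference:

stap -e 'global n; probe ipmib.InAddrErrors { n += op } probe end { printf("InAddrErrors counted: %d\n", n) }'

Run it for a while, generate some traffic, and press Ctrl+C; the end probe then reports how many packets with bad addresses were counted.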
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-inaddrerrors
Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network
Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 7.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. 
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 7.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 7.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 7.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.4.3. 
Minimum IBM Z system environment You can install OpenShift Container Platform version 4.16 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Important When running OpenShift Container Platform on IBM Z(R) without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements Five logical partitions (LPARs) Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine Additional resources Processors Resource/Systems Manager Planning Guide in IBM(R) Documentation for PR/SM mode considerations. IBM Dynamic Partition Manager (DPM) Guide in IBM(R) Documentation for DPM mode considerations. Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 7.4.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Three LPARs for OpenShift Container Platform control plane machines. At least six LPARs for OpenShift Container Platform compute machines. One machine or LPAR for the temporary OpenShift Container Platform bootstrap machine. 
IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 7.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 7.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. 
If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 7.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 7.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 7.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 7.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. 
A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 7.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 7.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 7.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. 
IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 7.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 7.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. 
The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 7.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 7.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 7.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. 
The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 7.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 7.5. 
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 7.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure.
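Because the validation procedure below repeats the same dig lookups for several records, you can optionally script the checks. The following is a minimal sketch rather than part of the official procedure; it assumes the cluster name, base domain, and IP addresses from the earlier examples and uses <nameserver_ip> as a placeholder for your DNS server:
for name in api api-int bootstrap control-plane0 control-plane1 control-plane2 compute0 compute1 random.apps; do
  dig +noall +answer @<nameserver_ip> "${name}.ocp4.example.com"
done
for ip in 192.168.1.5 192.168.1.96 192.168.1.97 192.168.1.98 192.168.1.99 192.168.1.11 192.168.1.7; do
  dig +noall +answer @<nameserver_ip> -x "${ip}"
done
Review the output against the expectations described in the following procedure; any record that resolves to an unexpected address indicates a DNS problem to correct before you continue.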
Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. 
Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 7.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.8. 
Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 7.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. 
If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 7.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. 
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. 
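To illustrate how the fields described in these tables fit together, the following is a minimal sketch of a CNO manifest. It is not a complete configuration: the mtu and genevePort values are only examples, and the file name <installation_directory>/manifests/cluster-network-03-config.yml is an assumption based on the usual advanced network configuration workflow.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400
      genevePort: 6081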
Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 7.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 7.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 7.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 7.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 7.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 7.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. 
Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 7.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
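Before you generate any installation assets, you can optionally confirm which installation program build you are about to use. The output below is only an illustration; the commit and release image digest vary by build:
USD ./openshift-install version
Example output
./openshift-install 4.16.0
built from commit <commit_id>
release image quay.io/openshift-release-dev/ocp-release@sha256:<digest>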
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 7.11. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
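After you write the Butane configurations, render them into MachineConfig YAML with the butane utility. The output file name below is an assumption, and where you place the rendered file depends on the machine config workflow referenced in the prerequisites; a common approach is to add the rendered files to the <installation_directory>/openshift/ directory before you generate the Ignition config files:
USD butane master-storage.bu -o 99-master-storage.yaml
Repeat the command for the Butane configuration that you create for the compute nodes.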
Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 7.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . 
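As an optional later check, once a node has booted and your SSH key is in place, you can confirm which kernel parameters were actually applied by inspecting /proc/cmdline on that node; this is not part of the documented procedure and the address is a placeholder:
USD ssh core@<node_ip> cat /proc/cmdline
Confirm that the rd.zfcp= or rd.dasd= entries and the ip= values match the parameter file that you prepared for that machine.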
The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine Repeat this procedure for the other machines in the cluster. 7.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 7.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. 
Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . 
For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 7.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
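The following loop is a minimal illustrative sketch of such an approval helper, not the documented machine-approver and not a substitute for the identity checks described in the preceding note. It reuses the approval commands shown later in this procedure and simply approves any pending CSR every 30 seconds while the new machines join: USD while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 30; done Stop the loop with Ctrl+C after all expected nodes reach the Ready status. In a production environment, implement an approver that verifies the requestor and the node identity before approving.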
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 7.16.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 7.16.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The provisioned storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 7.16.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 7.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. List the re-IPL configuration by running the following command: # lsreipl Example output for an FCP disk Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: "" Bootparms: "" clear: 0 Example output for a DASD disk for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: "" clear: 0 Shut down the node by running the following command: sudo shutdown -h Initiate a boot from LPAR from the Hardware Management Console (HMC). See Initiating a secure boot from an LPAR in IBM documentation. When the node is back, check the secure boot status again. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 7.18. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp 
bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in 
openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc debug node/<node_name> chroot /host", "cat /sys/firmware/ipl/secure", "1 1", "lsreipl", "Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0", "for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0", "sudo shutdown -h" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-lpar
Hardware accelerators
Hardware accelerators OpenShift Container Platform 4.16 Hardware accelerators Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hardware_accelerators/index
Chapter 2. Installing
Chapter 2. Installing Installing the Red Hat build of OpenTelemetry involves the following steps: Installing the Red Hat build of OpenTelemetry Operator. Creating a namespace for an OpenTelemetry Collector instance. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance. 2.1. Installing the Red Hat build of OpenTelemetry from the web console You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Install the Red Hat build of OpenTelemetry Operator: Go to Operators OperatorHub and search for Red Hat build of OpenTelemetry Operator . Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat Install Install View Operator . Important This installs the Operator with the default presets: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-operators Update approval Automatic In the Details tab of the installed Operator page, under ClusterServiceVersion details , verify that the installation Status is Succeeded . Create a project of your choice for the OpenTelemetry Collector instance that you will create in the step by going to Home Projects Create Project . Create an OpenTelemetry Collector instance. Go to Operators Installed Operators . Select OpenTelemetry Collector Create OpenTelemetry Collector YAML view . In the YAML view , customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Select Create . Verification Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance. Go to Operators Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the OpenTelemetry Collector instance are running. 2.2. Installing the Red Hat build of OpenTelemetry by using the CLI You can install the Red Hat build of OpenTelemetry from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. 
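For example, you can compare the client and server versions by running the following command. This is standard oc behavior rather than a step that is specific to this procedure, and the server version is only reported after you log in: USD oc version The output lists the client version and, when you are logged in, the server version, so you can confirm that they match before you continue.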
Run oc login : USD oc login --username=<your_username> Procedure Install the Red Hat build of OpenTelemetry Operator: Create a project for the Red Hat build of OpenTelemetry Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: "true" name: openshift-opentelemetry-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Check the Operator status by running the following command: USD oc get csv -n openshift-opentelemetry-operator Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step: To create a project without metadata, run the following command: USD oc new-project <project_of_opentelemetry_collector_instance> To create a project with metadata, run the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF Create an OpenTelemetry Collector instance in the project that you created for it. Note You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster. Customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Apply the customized CR by running the following command: USD oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF Verification Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command: USD oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml Get the OpenTelemetry Collector service by running the following command: USD oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> 2.3. Using taints and tolerations To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4 2.4. 
Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator For an example of applying this manifest with the CLI, see the commands after the additional resources list. 2.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI
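The following commands are a usage sketch for the procedure above. The file name generate-processors-rbac.yaml is an assumed name for a local copy of the manifest shown in the procedure, not a file that is shipped with the Operator: USD oc apply -f generate-processors-rbac.yaml USD oc describe clusterrole generate-processors-rbac The first command creates the cluster role and cluster role binding, and the second confirms that the expected rules were registered.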
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-opentelemetry-operator", "oc new-project <project_of_opentelemetry_collector_instance>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF", "oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml", "oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/red_hat_build_of_opentelemetry/install-otel
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) clusters that run on Red Hat OpenStack Platform. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process for your requirements: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preface-ocs-osp
Chapter 5. Using Insights tasks to help you convert from CentOS Linux 7 to RHEL 7
Chapter 5. Using Insights tasks to help you convert from CentOS Linux 7 to RHEL 7 You can use Red Hat Insights to help you convert from CentOS Linux 7 to RHEL 7. For more information about using Insights tasks to help convert your systems, see Converting using Insights in the Converting from a Linux distribution to RHEL using the Convert2RHEL utility documentation . Additional resources Video: Pre-conversion analysis for converting to Red Hat Enterprise Linux Video: Convert to Red Hat Enterprise Linux from CentOS7 Linux using Red Hat Insights Troubleshooting conversion-related Insights tasks Tasks help you update, manage, or secure your Red Hat Enterprise Linux infrastructure using Insights. Each task is a predefined playbook that executes a task from start to finish. If you have trouble completing some Insights conversion-related tasks, see: Troubleshooting issues with Red Hat Insights conversions
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/convert-to-centos-using-tasks_overview-tasks
Chapter 21. Maven settings and repositories for Red Hat Decision Manager
Chapter 21. Maven settings and repositories for Red Hat Decision Manager When you create a Red Hat Decision Manager project, Business Central uses the Maven repositories that are configured for Business Central. You can use the Maven global or user settings to direct all Red Hat Decision Manager projects to retrieve dependencies from the public Red Hat Decision Manager repository by modifying the Maven project object model (POM) file ( pom.xml ). You can also configure Business Central and KIE Server to use an external Maven repository or prepare a Maven mirror for offline use. For more information about Red Hat Decision Manager packaging and deployment options, see Packaging and deploying an Red Hat Decision Manager project . 21.1. Adding Maven dependencies for Red Hat Decision Manager To use the correct Maven dependencies in your Red Hat Decision Manager project, add the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For standalone projects that are not authored in Business Central, specify all dependencies required for your projects. In projects that you author in Business Central, the basic decision engine dependencies are provided automatically by Business Central. For a basic Red Hat Decision Manager project, declare the following dependencies, depending on the features that you want to use: For a basic Red Hat Decision Manager project, declare the following dependencies: Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. 
--> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Decision Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, USD{kie.version} is the Maven library version listed in What is the mapping between Red Hat Decision Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> 21.2. Configuring an external Maven repository for Business Central and KIE Server You can configure Business Central and KIE Server to use an external Maven repository, such as Nexus or Artifactory, instead of the built-in repository. This enables Business Central and KIE Server to access and download artifacts that are maintained in the external Maven repository. Important Artifacts in the repository do not receive automated security patches because Maven requires that artifacts be immutable. As a result, artifacts that are missing patches for known security flaws will remain in the repository to avoid breaking builds that depend on them. The version numbers of patched artifacts are incremented. For more information, see JBoss Enterprise Maven Repository . Note For information about configuring an external Maven repository for an authoring environment on Red Hat OpenShift Container Platform, see the following documents: Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 3 using templates Prerequisites Business Central and KIE Server are installed. For installation options, see Planning a Red Hat Decision Manager installation . Procedure Create a Maven settings.xml file with connection and access details for your external repository. For details about the settings.xml file, see the Maven Settings Reference . Save the file in a known location, for example, /opt/custom-config/settings.xml . In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open standalone-full.xml and under the <system-properties> tag, set the kie.maven.settings.custom property to the full path name of the settings.xml file. For example: <property name="kie.maven.settings.custom" value="/opt/custom-config/settings.xml"/> Start or restart Business Central and KIE Server. steps For each Business Central project that you want to export or push as a KJAR artifact to the external Maven repository, you must add the repository information in the project pom.xml file. For instructions, see Packaging and deploying an Red Hat Decision Manager project . 
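To verify that Maven parses your custom settings file correctly before you rely on it, you can optionally run the standard Maven help plugin against it. The path below matches the earlier example, /opt/custom-config/settings.xml, and might differ in your environment: USD mvn -s /opt/custom-config/settings.xml help:effective-settings The output prints the merged settings, including the repository and server entries that Business Central and KIE Server will use, which makes it easier to catch a malformed repository URL or a mismatched server ID before artifact resolution fails.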
21.3. Preparing a Maven mirror repository for offline use If your Red Hat Process Automation Manager deployment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment. Note You do not need to complete this procedure if your Red Hat Process Automation Manager deployment is connected to the Internet. Prerequisites A computer that has outgoing access to the public Internet is available. Procedure On the computer that has an outgoing connection to the public Internet, complete the following steps: Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download the Red Hat Process Automation Manager 7.13.5 Offliner Content List ( rhpam-7.13.5-offliner.zip ) product deliverable file. Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. Change to the directory and enter the following command: ./offline-repo-builder.sh offliner.txt This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository. If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support. If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet. Create a backup of the local Maven cache directory ( ~/.m2/repository ) and then clear the directory. Build the source of your projects using the mvn clean install command. For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project: mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true Replace /path/to/project/pom.xml with the path of the pom.xml file of the project. Copy the contents of the local Maven cache directory ( ~/.m2/repository ) to the repository subdirectory that was created. Copy the contents of the repository subdirectory to a directory on the computer on which you deployed Red Hat Process Automation Manager. This directory becomes the offline Maven mirror repository. Create and configure a settings.xml file for your Red Hat Process Automation Manager deployment as described in Section 21.2, "Configuring an external Maven repository for Business Central and KIE Server" . Make the following changes in the settings.xml file: Under the <profile> tag, if a <repositories> or <pluginRepositories> tag is missing, add the missing tags. Under <repositories>, add the following content: <repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. Under <pluginRepositories>, add the following content: <pluginRepository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> Replace /path/to/repo with the full path to the local Maven mirror repository directory.
Set the kie.maven.offline.force property for Business Central to true . For instructions about setting properties for Business Central, see Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 .
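As a convenience, the following sketch shows how the <repositories> and <pluginRepositories> snippets from the procedure above might fit into a complete offline profile in settings.xml. It is an illustration rather than product documentation: the profile ID and the /opt/maven-offline-repo path are placeholder assumptions, and inside <pluginRepositories> Maven expects <pluginRepository> child elements.
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <profiles>
    <profile>
      <id>offline-mirror</id>
      <repositories>
        <!-- Local mirror used for regular project dependencies. -->
        <repository>
          <id>offline-repository</id>
          <url>file:///opt/maven-offline-repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <!-- Same mirror, used for Maven plugins. -->
        <pluginRepository>
          <id>offline-plugin-repository</id>
          <url>file:///opt/maven-offline-repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>offline-mirror</activeProfile>
  </activeProfiles>
</settings>
Activating the profile through <activeProfiles> means the offline repositories are used for every build without extra command-line flags; replace /opt/maven-offline-repo with the directory that holds your mirror repository.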
[ "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>", "<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>", "<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>", "<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>", "<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "<property name=\"kie.maven.settings.custom\" value=\"/opt/custom-config/settings.xml\"/>", "./offline-repo-builder.sh offliner.txt", "mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true", "<repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>", "<repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/maven-repo-using-con_install-on-eap
Chapter 58. Servlet
Chapter 58. Servlet Only consumer is supported The Servlet component provides HTTP-based endpoints for consuming HTTP requests that arrive at an HTTP endpoint that is bound to a published Servlet. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Note The Servlet component is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once. If you find a situation where the message body appears to be empty or you need to access the data multiple times (for example, multicasting or redelivery error handling) you should use Stream caching or convert the message body to a String which is safe to be read multiple times. 58.1. URI format servlet://relative_path[?options] 58.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 58.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 58.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 58.3. Component Options The Servlet component supports 11 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean servletName (consumer) Default name of servlet to use. The default name is CamelServlet.
CamelServlet String attachmentMultipartBinding (consumer (advanced)) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. false boolean fileNameExtWhitelist (consumer (advanced)) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String httpRegistry (consumer (advanced)) To use a custom org.apache.camel.component.servlet.HttpRegistry. HttpRegistry allowJavaSerializedObject (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy 58.4. Endpoint Options The Servlet endpoint is configured using URI syntax: with the following path and query parameters: 58.4.1. Path Parameters (1 parameters) Name Description Default Type contextPath (consumer) Required The context-path to use. String 58.4.2. Query Parameters (22 parameters) Name Description Default Type chunked (consumer) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If this option is set to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. 
HeaderFilterStrategy httpBinding (common (advanced)) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding async (consumer) Configure the consumer to work in async mode. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean responseBufferSize (consumer) To use a custom buffer size on the javax.servlet.ServletResponse. Integer servletName (consumer) Name of the servlet to use. CamelServlet String transferException (consumer) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean attachmentMultipartBinding (consumer (advanced)) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. false boolean eagerCheckContentAvailable (consumer (advanced)) Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String mapHttpMessageBody (consumer (advanced)) If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping. 
true boolean mapHttpMessageFormUrlEncodedBody (consumer (advanced)) If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. true boolean mapHttpMessageHeaders (consumer (advanced)) If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping. true boolean optionsEnabled (consumer (advanced)) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean traceEnabled (consumer (advanced)) Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. false boolean 58.5. Message Headers Camel will apply the same Message Headers as the HTTP component. Camel will also populate all request.parameter and request.headers . For example, if a client request has the URL, http://myserver/myserver?orderid=123 , the exchange will contain a header named orderid with the value 123. 58.6. Usage You can consume only from endpoints generated by the Servlet component. Therefore, it should be used only as input into your Camel routes. To issue HTTP requests against other HTTP endpoints, use the HTTP component. 58.7. Spring Boot Auto-Configuration When using servlet with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-servlet-starter</artifactId> </dependency> The component supports 15 options, which are listed below. Name Description Default Type camel.component.servlet.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.servlet.attachment-multipart-binding Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlet's. false Boolean camel.component.servlet.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.servlet.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.servlet.enabled Whether to enable auto configuration of the servlet component. This is enabled by default. 
Boolean camel.component.servlet.file-name-ext-whitelist Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String camel.component.servlet.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.servlet.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. HttpBinding camel.component.servlet.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. HttpConfiguration camel.component.servlet.http-registry To use a custom org.apache.camel.component.servlet.HttpRegistry. The option is a org.apache.camel.http.common.HttpRegistry type. HttpRegistry camel.component.servlet.mute-exception If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false Boolean camel.component.servlet.servlet-name Default name of servlet to use. The default name is CamelServlet. CamelServlet String camel.servlet.mapping.context-path Context path used by the servlet component for automatic mapping. /camel/* String camel.servlet.mapping.enabled Enables the automatic mapping of the servlet component into the Spring web context. true Boolean camel.servlet.mapping.servlet-name The name of the Camel servlet. CamelServlet String
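To show how the Spring Boot options in the table above are typically used together, here is a minimal application.properties sketch. The context path value and the mute-exception choice are illustrative assumptions, not recommended defaults; the option names themselves come from the table.
# Illustrative values only; option names are taken from the auto-configuration table above.
# Mount the Camel servlet under /api/* instead of the default /camel/*
camel.servlet.mapping.context-path=/api/*
# Keep the automatic servlet mapping enabled (the default)
camel.servlet.mapping.enabled=true
# Name of the Camel servlet (default shown)
camel.servlet.mapping.servlet-name=CamelServlet
# Component-level option: hide stack traces from HTTP responses on failure
camel.component.servlet.mute-exception=true
With this mapping, a route that consumes from a servlet endpoint such as servlet:hello would typically be reachable at http://<host>:<port>/api/hello, because the endpoint contextPath is appended to the servlet mapping.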
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "servlet://relative_path[?options]", "servlet:contextPath", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-servlet-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-servlet-component-starter
Chapter 38. Storage
Chapter 38. Storage No support for thin provisioning on top of RAID in a cluster While RAID logical volumes and thinly provisioned logical volumes can be used in a cluster when activated exclusively, there is currently no support for thin provisioning on top of RAID in a cluster. This is the case even if the combination is activated exclusively. Currently, this combination is only supported in LVM's single-machine, non-clustered mode. When using thin provisioning, it is possible to lose buffered writes to the thin pool if it reaches capacity If a thin pool is filled to capacity, some writes may be lost even if the pool is being grown at that time. This is because a resize operation (even an automated one) attempts to flush outstanding I/O to the storage device before the resize is performed. Because there is no room left in the thin pool, those I/O operations must first be errored out to allow the grow to succeed. Once the thin pool has grown, the logical volumes associated with the thin pool return to normal operation. As a workaround to this problem, set 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' appropriately for your needs in the lvm.conf file. Do not set the threshold so high, or the percent so low, that your thin pool reaches full capacity so quickly that there is not enough time for it to be auto-extended (or manually extended if you prefer). If you are not using over-provisioning (creating logical volumes in excess of the size of the backing thin pool), be prepared to remove snapshots as necessary if the thin pool nears capacity.
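The workaround above refers to two lvm.conf settings; the following sketch shows where they live and uses illustrative values (a 70% threshold and 20% growth) rather than recommended ones. Tune both numbers to the write rate and free space of your environment.
# Excerpt from /etc/lvm/lvm.conf (illustrative values)
activation {
    # Begin auto-extending a thin pool once it is 70% full.
    thin_pool_autoextend_threshold = 70
    # Each auto-extension grows the pool by 20% of its current size.
    thin_pool_autoextend_percent = 20
}
Setting thin_pool_autoextend_threshold to 100 disables automatic extension, leaving you to extend the pool manually. Automatic extension also relies on LVM monitoring (dmeventd), which is enabled by default.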
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/known-issues-storage
Operations Guide
Operations Guide Red Hat Ceph Storage 8 Operational tasks for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
[ "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "service_type: node-exporter placement: host_pattern: '*' extra_entrypoint_args: - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"", "cephadm shell", "ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply DAEMON_NAME label: LABEL", "ceph orch apply mon label:mon", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply DAEMON_NAME --placement=\"label: LABEL \"", "ceph orch apply mon --placement=\"label:mon\"", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --daemon_type=mon ceph orch ps --service_name=mon", "cephadm shell", "ceph orch host ls", "ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 _HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mgr --placement=\"2 host01 host02 host03\"", "ceph orch host ls", "service_type: mon placement: host_pattern: \"mon*\" --- service_type: mgr placement: host_pattern: \"mgr*\" --- service_type: osd service_id: default_drive_group placement: host_pattern: \"osd*\" data_devices: all: true", "ceph orch set-unmanaged SERVICE_NAME", "ceph orch set-unmanaged grafana", "ceph orch set-managed SERVICE_NAME", "ceph orch set-managed mon", "touch mon.yaml", "service_type: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2", "service_type: mon placement: hosts: - host01 - host02 - host03", "service_type: SERVICE_NAME placement: label: \" LABEL_1 \"", "service_type: mon placement: label: \"mon\"", "extra_container_args: - \"-v\" - \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\" - \"--security-opt\" - \"label=disable\" - \"cpus=2\" - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"", "cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml", "cd /var/lib/ceph/mon/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mon.yaml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "touch mirror.yaml", "service_type: cephfs-mirror service_name: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: cephfs-mirror service_name: cephfs-mirror placement: hosts: - host01 - host02 - host03", "cephadm shell --mount mirror.yaml:/var/lib/ceph/mirror.yaml", "cd /var/lib/ceph/", "ceph orch apply -i mirror.yaml", "ceph orch ls", "ceph orch ps --daemon_type=cephfs-mirror", "cephadm shell", "ceph cephadm get-pub-key > ~/ PATH", "ceph cephadm get-pub-key > ~/ceph.pub", "ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2", "ssh-copy-id -f -i ~/ceph.pub root@host02", "host01 host02 host03 [admin] host00", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "cephadm shell", "ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--label= LABEL_NAME_1 , LABEL_NAME_2 ]", "ceph orch host add host02 10.10.128.70 --labels=mon,mgr", "ceph orch host ls", "touch hosts.yaml", "service_type: host addr: host01 hostname: host01 labels: - mon - osd - mgr --- service_type: host addr: host02 hostname: host02 labels: - mon - osd - mgr --- service_type: host 
addr: host03 hostname: host03 labels: - mon - osd", "cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml", "cd /var/lib/ceph/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i hosts.yaml", "ceph orch host ls", "cephadm shell", "ceph orch host ls", "cephadm shell", "ceph orch host label add HOSTNAME LABEL", "ceph orch host label add host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host label rm HOSTNAME LABEL", "ceph orch host label rm host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host ls", "ceph orch host drain HOSTNAME", "ceph orch host drain host02", "ceph orch osd rm status", "ceph orch ps HOSTNAME", "ceph orch ps host02", "ceph orch host rm HOSTNAME", "ceph orch host rm host02", "cephadm shell", "ceph orch host maintenance enter HOST_NAME [--force]", "ceph orch host maintenance enter host02 --force", "ceph orch host maintenance exit HOST_NAME", "ceph orch host maintenance exit host02", "ceph orch host ls", "ceph mon set election_strategy {classic|disallow|connectivity}", "cephadm shell", "ceph orch apply mon --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"host01 host02 host03\"", "ceph orch apply mon host01 ceph orch apply mon host02 ceph orch apply mon host03", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply mon --placement=\" HOST_NAME_1 :mon HOST_NAME_2 :mon HOST_NAME_3 :mon\"", "ceph orch apply mon --placement=\"host01:mon host02:mon host03:mon\"", "ceph orch apply mon --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "ceph orch apply mon NUMBER_OF_DAEMONS", "ceph orch apply mon 3", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "touch mon.yaml", "service_type: mon placement: hosts: - HOST_NAME_1 - HOST_NAME_2", "service_type: mon placement: hosts: - host01 - host02", "cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml", "cd /var/lib/ceph/mon/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mon.yaml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon --unmanaged", "ceph orch daemon add mon HOST_NAME_1 : IP_OR_NETWORK", "ceph orch daemon add mon host03:10.1.2.123", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"", "ceph orch apply mon \"2 host01 host03\"", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "ssh root@ MONITOR_ID", "ssh root@host00", "cephadm unit --name DAEMON_NAME . HOSTNAME stop", "cephadm unit --name mon.host00 stop", "cephadm shell --name DAEMON_NAME . HOSTNAME", "cephadm shell --name mon.host00", "ceph-mon -i HOSTNAME --extract-monmap TEMP_PATH", "ceph-mon -i host01 --extract-monmap /tmp/monmap 2022-01-05T11:13:24.440+0000 7f7603bd1700 -1 wrote monmap to /tmp/monmap", "monmaptool TEMPORARY_PATH --rm HOSTNAME", "monmaptool /tmp/monmap --rm host01", "ceph-mon -i HOSTNAME --inject-monmap TEMP_PATH", "ceph-mon -i host00 --inject-monmap /tmp/monmap", "cephadm unit --name DAEMON_NAME . 
HOSTNAME start", "cephadm unit --name mon.host00 start", "ceph -s", "cephadm shell", "ceph orch apply mgr --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mgr --placement=\"host01 host02 host03\"", "ceph orch apply mgr NUMBER_OF_DAEMONS", "ceph orch apply mgr 3", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mgr", "cephadm shell", "ceph orch apply mgr \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"", "ceph orch apply mgr \"2 host01 host03\"", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mgr", "ceph mgr module enable dashboard ceph mgr module ls MODULE balancer on (always on) crash on (always on) devicehealth on (always on) orchestrator on (always on) pg_autoscaler on (always on) progress on (always on) rbd_support on (always on) status on (always on) telemetry on (always on) volumes on (always on) cephadm on dashboard on iostat on nfs on prometheus on restful on alerts - diskprediction_local - influx - insights - k8sevents - localpool - mds_autoscaler - mirroring - osd_perf_query - osd_support - rgw - rook - selftest - snap_schedule - stats - telegraf - test_orchestrator - zabbix - ceph mgr services { \"dashboard\": \"http://myserver.com:7789/\", \"restful\": \"https://myserver.com:8789/\" }", "[mon] mgr initial modules = dashboard balancer", "ceph <command | help>", "ceph osd set-require-min-compat-client luminous", "ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it", "ceph features", "ceph osd set-require-min-compat-client reef", "ceph osd set-require-min-compat-client reef --yes-i-really-mean-it", "ceph features", "ceph osd set-require-min-compat-client reef", "ceph osd set-require-min-compat-client reef --yes-i-really-mean-it", "ceph features", "ceph mgr module enable balancer", "ceph balancer on", "ceph balancer mode crush-compat", "ceph balancer mode upmap", "ceph balancer status", "ceph balancer on", "ceph balancer on ceph balancer mode crush-compat ceph balancer status { \"active\": true, \"last_optimize_duration\": \"0:00:00.001174\", \"last_optimize_started\": \"Fri Nov 22 11:09:18 2024\", \"mode\": \"crush-compact\", \"no_optimization_needed\": false, \"optimize_result\": \"Unable to find further optimization, change balancer mode and retry might help\", \"plans\": [] }", "ceph balancer off ceph balancer status { \"active\": false, \"last_optimize_duration\": \"\", \"last_optimize_started\": \"\", \"mode\": \"crush-compat\", \"no_optimization_needed\": false, \"optimize_result\": \"\", \"plans\": [] }", "ceph config-key set mgr target_max_misplaced_ratio THRESHOLD_PERCENTAGE", "ceph config-key set mgr target_max_misplaced_ratio .07", "ceph config set mgr mgr/balancer/sleep_interval 60", "ceph config set mgr mgr/balancer/begin_time 0000", "ceph config set mgr mgr/balancer/end_time 2359", "ceph config set mgr mgr/balancer/begin_weekday 0", "ceph config set mgr mgr/balancer/end_weekday 6", "ceph config set mgr mgr/balancer/pool_ids 1,2,3", "ceph balancer eval", "ceph balancer eval POOL_NAME", "ceph balancer eval rbd", "ceph balancer eval-verbose", "ceph balancer optimize PLAN_NAME", "ceph balancer optimize rbd_123", "ceph balancer show PLAN_NAME", "ceph balancer show rbd_123", "ceph balancer rm PLAN_NAME", "ceph balancer rm rbd_123", "ceph balancer status", "ceph balancer eval PLAN_NAME", "ceph balancer eval rbd_123", "ceph balancer execute PLAN_NAME", "ceph balancer execute rbd_123", "ceph mgr module enable balancer", "ceph balancer on", "ceph osd 
set-require-min-compat-client reef", "ceph osd set-require-min-compat-client reef --yes-i-really-mean-it", "You can check what client versions are in use with: the ceph features command.", "ceph features", "ceph balancer mode upmap-read ceph balancer mode read", "ceph balancer status", "ceph balancer status { \"active\": true, \"last_optimize_duration\": \"0:00:00.013640\", \"last_optimize_started\": \"Mon Nov 22 14:47:57 2024\", \"mode\": \"upmap-read\", \"no_optimization_needed\": true, \"optimize_result\": \"Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect\", \"plans\": [] }", "ceph osd getmap -o map", "ospmaptool map -upmap out.txt", "source out.txt", "ceph osd pool ls detail", "ceph osd pool ls detail pool 1 '.mgr' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 17 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00 pool 2 'cephfs.a.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 55 lfor 0/0/25 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 1.50 pool 3 'cephfs.a.data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 27 lfor 0/0/25 flags hashpspool,bulk stripe_width 0 application cephfs read_balance_score 1.31", "ceph osd getmap -o om", "got osdmap epoch 56", "osdmaptool om --read out.txt --read-pool _POOL_NAME_ [--vstart]", "osdmaptool om --read out.txt --read-pool cephfs.a.meta ./bin/osdmaptool: osdmap file 'om' writing upmap command output to: out.txt ---------- BEFORE ------------ osd.0 | primary affinity: 1 | number of prims: 4 osd.1 | primary affinity: 1 | number of prims: 8 osd.2 | primary affinity: 1 | number of prims: 4 read_balance_score of 'cephfs.a.meta': 1.5 ---------- AFTER ------------ osd.0 | primary affinity: 1 | number of prims: 5 osd.1 | primary affinity: 1 | number of prims: 6 osd.2 | primary affinity: 1 | number of prims: 5 read_balance_score of 'cephfs.a.meta': 1.13 num changes: 2", "source out.txt", "cat out.txt ceph osd pg-upmap-primary 2.3 0 ceph osd pg-upmap-primary 2.4 2 source out.txt change primary for pg 2.3 to osd.0 change primary for pg 2.4 to osd.2", "Error EPERM: min_compat_client luminous < reef, which is required for pg-upmap-primary. 
Try 'ceph osd set-require-min-compat-client reef' before using the new interface", "cephadm shell", "ceph mgr module enable alerts", "ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", \"status\", \"telemetry\", \"volumes\" ], \"enabled_modules\": [ \"alerts\", \"cephadm\", \"dashboard\", \"iostat\", \"nfs\", \"prometheus\", \"restful\" ]", "ceph config set mgr mgr/alerts/smtp_host SMTP_SERVER ceph config set mgr mgr/alerts/smtp_destination RECEIVER_EMAIL_ADDRESS ceph config set mgr mgr/alerts/smtp_sender SENDER_EMAIL_ADDRESS", "ceph config set mgr mgr/alerts/smtp_host smtp.example.com ceph config set mgr mgr/alerts/smtp_destination [email protected] ceph config set mgr mgr/alerts/smtp_sender [email protected]", "ceph config set mgr mgr/alerts/smtp_port PORT_NUMBER", "ceph config set mgr mgr/alerts/smtp_port 587", "ceph config set mgr mgr/alerts/smtp_user USERNAME ceph config set mgr mgr/alerts/smtp_password PASSWORD", "ceph config set mgr mgr/alerts/smtp_user admin1234 ceph config set mgr mgr/alerts/smtp_password admin1234", "ceph config set mgr mgr/alerts/smtp_from_name CLUSTER_NAME", "ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Test'", "ceph config set mgr mgr/alerts/interval INTERVAL", "ceph config set mgr mgr/alerts/interval \"5m\"", "ceph alerts send", "ceph config set mgr/crash/warn_recent_interval 0", "ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ]", "ceph crash post -i meta", "ceph crash ls", "ceph crash ls-new", "ceph crash ls-new", "ceph crash stat 8 crashes recorded 8 older than 1 days old: 2022-05-20T08:30:14.533316Z_4ea88673-8db6-4959-a8c6-0eea22d305c2 2022-05-20T08:30:14.590789Z_30a8bb92-2147-4e0f-a58b-a12c2c73d4f5 2022-05-20T08:34:42.278648Z_6a91a778-bce6-4ef3-a3fb-84c4276c8297 2022-05-20T08:34:42.801268Z_e5f25c74-c381-46b1-bee3-63d891f9fc2d 2022-05-20T08:34:42.803141Z_96adfc59-be3a-4a38-9981-e71ad3d55e47 2022-05-20T08:34:42.830416Z_e45ed474-550c-44b3-b9bb-283e3f4cc1fe 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d 2022-05-24T19:58:44.315282Z_1847afbc-f8a9-45da-94e8-5aef0738954e", "ceph crash info CRASH_ID", "ceph crash info 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d { \"assert_condition\": \"session_map.sessions.empty()\", \"assert_file\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc\", \"assert_func\": \"virtual Monitor::~Monitor()\", \"assert_line\": 287, \"assert_msg\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: In function 'virtual Monitor::~Monitor()' thread 7f67a1aeb700 time 2022-05-24T19:58:42.545485+0000\\n/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: 287: FAILED ceph_assert(session_map.sessions.empty())\\n\", \"assert_thread_name\": \"ceph-mon\", \"backtrace\": [ \"/lib64/libpthread.so.0(+0x12b30) [0x7f679678bb30]\", \"gsignal()\", \"abort()\", \"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f6798c8d37b]\", \"/usr/lib64/ceph/libceph-common.so.2(+0x276544) [0x7f6798c8d544]\", \"(Monitor::~Monitor()+0xe30) [0x561152ed3c80]\", \"(Monitor::~Monitor()+0xd) [0x561152ed3cdd]\", \"main()\", \"__libc_start_main()\", \"_start()\" ], \"ceph_version\": \"16.2.8-65.el8cp\", \"crash_id\": 
\"2022-07-06T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d\", \"entity_name\": \"mon.ceph-adm4\", \"os_id\": \"rhel\", \"os_name\": \"Red Hat Enterprise Linux\", \"os_version\": \"8.5 (Ootpa)\", \"os_version_id\": \"8.5\", \"process_name\": \"ceph-mon\", \"stack_sig\": \"957c21d558d0cba4cee9e8aaf9227b3b1b09738b8a4d2c9f4dc26d9233b0d511\", \"timestamp\": \"2022-07-06T19:58:42.549073Z\", \"utsname_hostname\": \"host02\", \"utsname_machine\": \"x86_64\", \"utsname_release\": \"4.18.0-240.15.1.el8_3.x86_64\", \"utsname_sysname\": \"Linux\", \"utsname_version\": \"#1 SMP Wed Jul 06 03:12:15 EDT 2022\" }", "ceph crash prune KEEP", "ceph crash prune 60", "ceph crash archive CRASH_ID", "ceph crash archive 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d", "ceph crash archive-all", "ceph crash rm CRASH_ID", "ceph crash rm 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d", "ceph telemetry on", "ceph telemetry enable channel basic ceph telemetry enable channel crash ceph telemetry enable channel device ceph telemetry enable channel ident ceph telemetry enable channel perf ceph telemetry disable channel basic ceph telemetry disable channel crash ceph telemetry disable channel device ceph telemetry disable channel ident ceph telemetry disable channel perf", "ceph telemetry enable channel basic crash device ident perf ceph telemetry disable channel basic crash device ident perf", "ceph telemetry enable channel all ceph telemetry disable channel all", "ceph telemetry show", "ceph telemetry preview", "ceph telemetry show-device", "ceph telemetry preview-device", "ceph telemetry show-all", "ceph telemetry preview-all", "ceph telemetry show CHANNEL_NAME", "ceph telemetry preview CHANNEL_NAME", "ceph telemetry collection ls", "ceph telemetry diff", "ceph telemetry on ceph telemetry enable channel CHANNEL_NAME", "ceph config set mgr mgr/telemetry/interval INTERVAL", "ceph config set mgr mgr/telemetry/interval 72", "ceph telemetry status", "ceph telemetry send", "ceph config set mgr mgr/telemetry/proxy PROXY_URL", "ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080", "ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080", "ceph config set mgr mgr/telemetry/contact '_CONTACT_NAME_' ceph config set mgr mgr/telemetry/description '_DESCRIPTION_' ceph config set mgr mgr/telemetry/channel_ident true", "ceph config set mgr mgr/telemetry/contact 'John Doe <[email protected]>' ceph config set mgr mgr/telemetry/description 'My first Ceph cluster' ceph config set mgr mgr/telemetry/channel_ident true", "ceph config set mgr mgr/telemetry/leaderboard true", "ceph telemetry off", "ceph config set osd osd_memory_target_autotune true", "osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )", "ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2", "ceph config set osd.123 osd_memory_target 7860684936", "ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES", "ceph config set osd/host:host01 osd_memory_target 1000000000", "ceph orch host label add HOSTNAME _no_autotune_memory", "ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "cephadm shell lsmcli ldl", "cephadm shell ceph config set mgr mgr/cephadm/device_enhanced_scan true", "ceph orch device ls", 
"cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch device zap HOSTNAME FILE_PATH --force", "ceph orch device zap host02 /dev/sdb --force", "ceph orch device ls", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch apply osd --all-available-devices", "ceph orch apply osd --all-available-devices --unmanaged=true", "ceph orch ls", "ceph osd tree", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch daemon add osd HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd host02:/dev/sdb", "ceph orch daemon add osd --method raw HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd --method raw host02:/dev/sdb", "ceph orch ls osd", "ceph osd tree", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "touch osd_spec.yaml", "service_type: osd service_id: SERVICE_ID placement: host_pattern: '*' # optional data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH osds_per_device: NUMBER_OF_DEVICES # optional db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: all: true paths: - /dev/sdb encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: size: '80G' db_devices: size: '40G:' paths: - /dev/sdc", "service_type: osd service_id: all-available-devices encrypted: \"true\" method: raw placement: host_pattern: \"*\" data_devices: all: \"true\"", "service_type: osd service_id: osd_spec_hdd placement: host_pattern: '*' data_devices: rotational: 0 db_devices: model: Model-name limit: 2 --- service_type: osd service_id: osd_spec_ssd placement: host_pattern: '*' data_devices: model: Model-name db_devices: vendor: Vendor-name", "service_type: osd service_id: osd_spec_node_one_to_five placement: host_pattern: 'node[1-5]' data_devices: rotational: 1 db_devices: rotational: 0 --- service_type: osd service_id: osd_spec_six_to_ten placement: host_pattern: 'node[6-10]' data_devices: model: Model-name db_devices: model: Model-name", "service_type: osd service_id: osd_using_paths placement: hosts: - host01 - host02 data_devices: paths: - /dev/sdb db_devices: paths: - /dev/sdc wal_devices: paths: - /dev/sdd", "service_type: osd service_id: multiple_osds placement: hosts: - host01 - host02 osds_per_device: 4 data_devices: paths: - /dev/sdb", "service_type: osd service_id: SERVICE_ID placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_spec placement: hosts: - machine1 data_devices: paths: - /dev/vg_hdd/lv_hdd db_devices: paths: - /dev/vg_nvme/lv_nvme", "service_type: osd service_id: OSD_BY_ID_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_id_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 db_devices: paths: - 
/dev/disk/by-id/nvme-nvme.1b36-31323334-51454d55204e564d65204374726c-00000001", "service_type: osd service_id: OSD_BY_PATH_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_path_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-path/pci-0000:0d:00.0-scsi-0:0:0:4 db_devices: paths: - /dev/disk/by-path/pci-0000:00:02.0-nvme-1", "cephadm shell --mount osd_spec.yaml:/var/lib/ceph/osd/osd_spec.yaml", "cd /var/lib/ceph/osd/", "ceph orch apply -i osd_spec.yaml --dry-run", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i osd_spec.yaml", "ceph orch ls osd", "ceph osd tree", "cephadm shell", "ceph osd tree", "ceph orch osd rm OSD_ID [--replace] [--force] --zap", "ceph orch osd rm 0 --zap", "ceph orch osd rm OSD_ID OSD_ID --zap", "ceph orch osd rm 2 5 --zap", "ceph orch osd rm status OSD HOST STATE PGS REPLACE FORCE ZAP DRAIN STARTED AT 9 host01 done, waiting for purge 0 False False True 2023-06-06 17:50:50.525690 10 host03 done, waiting for purge 0 False False True 2023-06-06 17:49:38.731533 11 host02 done, waiting for purge 0 False False True 2023-06-06 17:48:36.641105", "ceph osd tree", "cephadm shell", "ceph osd metadata -f plain | grep device_paths \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdi=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdf=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdg=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdh=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdk=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdl=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdj=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdm=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", [.. output omitted ..]", "ceph osd tree", "ceph orch osd rm OSD_ID --replace [--force]", "ceph orch osd rm 0 --replace", "ceph orch osd rm status", "ceph orch pause ceph orch status Backend: cephadm Available: Yes Paused: Yes", "ceph orch device zap node.example.com /dev/sdi --force zap successful for /dev/sdi on node.example.com ceph orch device zap node.example.com /dev/sdf --force zap successful for /dev/sdf on node.example.com", "ceph orch resume", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.77112 root default -3 0.77112 host node 0 hdd 0.09639 osd.0 up 1.00000 1.00000 1 hdd 0.09639 osd.1 up 1.00000 1.00000 2 hdd 0.09639 osd.2 up 1.00000 1.00000 3 hdd 0.09639 osd.3 up 1.00000 1.00000 4 hdd 0.09639 osd.4 up 1.00000 1.00000 5 hdd 0.09639 osd.5 up 1.00000 1.00000 6 hdd 0.09639 osd.6 up 1.00000 1.00000 7 hdd 0.09639 osd.7 up 1.00000 1.00000 [.. 
output omitted ..]", "ceph osd tree", "ceph osd metadata 0 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\", ceph osd metadata 1 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\",", "cephadm shell", "ceph orch osd rm OSD_ID [--replace]", "ceph orch osd rm 8 --replace Scheduled OSD(s) for removal", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.32297 root default -9 0.05177 host host10 3 hdd 0.01520 osd.3 up 1.00000 1.00000 13 hdd 0.02489 osd.13 up 1.00000 1.00000 17 hdd 0.01169 osd.17 up 1.00000 1.00000 -13 0.05177 host host11 2 hdd 0.01520 osd.2 up 1.00000 1.00000 15 hdd 0.02489 osd.15 up 1.00000 1.00000 19 hdd 0.01169 osd.19 up 1.00000 1.00000 -7 0.05835 host host12 20 hdd 0.01459 osd.20 up 1.00000 1.00000 21 hdd 0.01459 osd.21 up 1.00000 1.00000 22 hdd 0.01459 osd.22 up 1.00000 1.00000 23 hdd 0.01459 osd.23 up 1.00000 1.00000 -5 0.03827 host host04 1 hdd 0.01169 osd.1 up 1.00000 1.00000 6 hdd 0.01129 osd.6 up 1.00000 1.00000 7 hdd 0.00749 osd.7 up 1.00000 1.00000 9 hdd 0.00780 osd.9 up 1.00000 1.00000 -3 0.03816 host host05 0 hdd 0.01169 osd.0 up 1.00000 1.00000 8 hdd 0.01129 osd.8 destroyed 0 1.00000 12 hdd 0.00749 osd.12 up 1.00000 1.00000 16 hdd 0.00769 osd.16 up 1.00000 1.00000 -15 0.04237 host host06 5 hdd 0.01239 osd.5 up 1.00000 1.00000 10 hdd 0.01540 osd.10 up 1.00000 1.00000 11 hdd 0.01459 osd.11 up 1.00000 1.00000 -11 0.04227 host host07 4 hdd 0.01239 osd.4 up 1.00000 1.00000 14 hdd 0.01529 osd.14 up 1.00000 1.00000 18 hdd 0.01459 osd.18 up 1.00000 1.00000", "ceph-volume lvm zap --osd-id OSD_ID", "ceph-volume lvm zap --osd-id 8 Zapping: /dev/vg1/data-lv2 Closing encrypted path /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/sbin/cryptsetup remove /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/bin/dd if=/dev/zero of=/dev/vg1/data-lv2 bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.034742 s, 302 MB/s Zapping successful for OSD: 8", "ceph-volume lvm list", "cat osd.yml service_type: osd service_id: osd_service placement: hosts: - host03 data_devices: paths: - /dev/vg1/data-lv2 db_devices: paths: - /dev/vg1/db-lv1", "ceph orch apply -i osd.yml Scheduled osd.osd_service update", "ceph -s ceph osd tree", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 19G 0 part ├─rhel-root 253:0 0 17G 0 lvm / └─rhel-swap 253:1 0 2G 0 lvm [SWAP] sdb 8:16 0 10G 0 disk └─ceph--5726d3e9--4fdb--4eda--b56a--3e0df88d663f-osd--block--3ceb89ec--87ef--46b4--99c6--2a56bac09ff0 253:2 0 10G 0 lvm sdc 8:32 0 10G 0 disk └─ceph--d7c9ab50--f5c0--4be0--a8fd--e0313115f65c-osd--block--37c370df--1263--487f--a476--08e28bdbcd3c 253:4 0 10G 0 lvm sdd 8:48 0 10G 0 disk ├─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--31b20150--4cbc--4c2c--9c8f--6f624f3bfd89 253:7 0 2.5G 0 lvm └─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--1bee5101--dbab--4155--a02c--e5a747d38a56 253:9 0 2.5G 0 lvm sde 8:64 0 10G 0 disk sdf 8:80 0 10G 0 disk └─ceph--412ee99b--4303--4199--930a--0d976e1599a2-osd--block--3a99af02--7c73--4236--9879--1fad1fe6203d 253:6 0 10G 0 lvm sdg 8:96 0 10G 0 disk └─ceph--316ca066--aeb6--46e1--8c57--f12f279467b4-osd--block--58475365--51e7--42f2--9681--e0c921947ae6 253:8 0 10G 0 lvm sdh 8:112 0 10G 0 disk ├─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--0dfe6eca--ba58--438a--9510--d96e6814d853 253:3 0 5G 0 lvm 
└─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--26b70c30--8817--45de--8843--4c0932ad2429 253:5 0 5G 0 lvm sr0", "cephadm shell", "ceph-volume lvm list /dev/sdh ====== osd.2 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 block device /dev/ceph-5726d3e9-4fdb-4eda-b56a-3e0df88d663f/osd-block-3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 block uuid GkWLoo-f0jd-Apj2-Zmwj-ce0h-OY6J-UuW8aD cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 db uuid 6gSPoc-L39h-afN3-rDl6-kozT-AX9S-XR20xM encrypted 0 osd fsid 3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh ====== osd.5 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 block device /dev/ceph-d7c9ab50-f5c0-4be0-a8fd-e0313115f65c/osd-block-37c370df-1263-487f-a476-08e28bdbcd3c block uuid Eay3I7-fcz5-AWvp-kRcI-mJaH-n03V-Zr0wmJ cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 db uuid mwSohP-u72r-DHcT-BPka-piwA-lSwx-w24N0M encrypted 0 osd fsid 37c370df-1263-487f-a476-08e28bdbcd3c osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh", "cat osds.yml service_type: osd service_id: non-colocated unmanaged: true placement: host_pattern: 'ceph*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sdh", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 9m ago 4d count:1 crash 3/4 4d ago 4d * grafana ?:3000 1/1 9m ago 4d count:1 mgr 1/2 4d ago 4d count:2 mon 3/5 4d ago 4d count:5 node-exporter ?:9100 3/4 4d ago 4d * osd.non-colocated 8 4d ago 5s <unmanaged> prometheus ?:9095 1/1 9m ago 4d count:1", "ceph orch osd rm 2 5 --zap --replace Scheduled OSD(s) for removal", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.1 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 996 KiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.0 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.5", "cat osds.yml service_type: osd service_id: non-colocated unmanaged: false placement: host_pattern: 'ceph01*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sde", "ceph orch apply -i osds.yml --dry-run WARNING! Dry-Runs are snapshots of a certain point in time and are bound to the current inventory setup. If any of these conditions change, the preview will be invalid. Please make sure to have a minimal timeframe between planning and applying the specs. 
#################### SERVICESPEC PREVIEWS #################### +---------+------+--------+-------------+ |SERVICE |NAME |ADD_TO |REMOVE_FROM | +---------+------+--------+-------------+ +---------+------+--------+-------------+ ################ OSDSPEC PREVIEWS ################ +---------+-------+-------+----------+----------+-----+ |SERVICE |NAME |HOST |DATA |DB |WAL | +---------+-------+-------+----------+----------+-----+ |osd |non-colocated |host02 |/dev/sdb |/dev/sde |- | |osd |non-colocated |host02 |/dev/sdc |/dev/sde |- | +---------+-------+-------+----------+----------+-----+", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.5 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.5", "ceph-volume lvm list /dev/sde ====== osd.2 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 block device /dev/ceph-a4afcb78-c804-4daf-b78f-3c7ad1ed0379/osd-block-564b3d2f-0f85-4289-899a-9f98a2641979 block uuid ITPVPa-CCQ5-BbFa-FZCn-FeYt-c5N4-ssdU41 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 db uuid HF1bYb-fTK7-0dcB-CHzW-xvNn-dCym-KKdU5e encrypted 0 osd fsid 564b3d2f-0f85-4289-899a-9f98a2641979 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sde ====== osd.5 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd block device /dev/ceph-b37c8310-77f9-4163-964b-f17b4c29c537/osd-block-b42a4f1f-8e19-4416-a874-6ff5d305d97f block uuid 0LuPoz-ao7S-UL2t-BDIs-C9pl-ct8J-xh5ep4 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd db uuid SvmXms-iWkj-MTG7-VnJj-r5Mo-Moiw-MsbqVD encrypted 0 osd fsid b42a4f1f-8e19-4416-a874-6ff5d305d97f osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sde", "cephadm shell", "ceph osd tree", "ceph orch osd rm stop OSD_ID", "ceph orch osd rm stop 0", "ceph orch osd rm status", "ceph osd tree", "cephadm shell", "ceph cephadm osd activate HOSTNAME", "ceph cephadm osd activate host03", "ceph orch ls", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "ceph -w", "ceph config-key set mgr/cephadm/ HOSTNAME /grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/ HOSTNAME /grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem", "ceph mgr module enable prometheus", "ceph orch redeploy prometheus", "cd /var/lib/ceph/ DAEMON_PATH /", "cd /var/lib/ceph/monitoring/", "touch monitoring.yml", "service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter", "ceph orch apply -i monitoring.yml", "ceph orch ls", "ceph orch ps 
--service_name= SERVICE_NAME", "ceph orch ps --service_name=prometheus", "cephadm shell", "ceph orch rm SERVICE_NAME --force", "ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm ceph-exporter ceph orch rm alertmanager ceph mgr module disable prometheus", "ceph orch status", "ceph orch ls", "ceph orch ps", "ceph orch ps", "mkdir /etc/ceph/", "cd /etc/ceph/", "ceph config generate-minimal-conf minimal ceph.conf for 417b1d7a-a0e6-11eb-b940-001a4a000740 [global] fsid = 417b1d7a-a0e6-11eb-b940-001a4a000740 mon_host = [v2:10.74.249.41:3300/0,v1:10.74.249.41:6789/0]", "mkdir /etc/ceph/", "cd /etc/ceph/", "ceph auth get-or-create client. CLIENT_NAME -o /etc/ceph/ NAME_OF_THE_FILE", "ceph auth get-or-create client.fs -o /etc/ceph/ceph.keyring", "cat ceph.keyring [client.fs] key = AQAvoH5gkUCsExAATz3xCBLd4n6B6jRv+Z7CVQ==", "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "cephadm shell", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=test_realm --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=default --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default", "radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default", "radosgw-admin period update --rgw-realm= REALM_NAME --commit", "radosgw-admin period update --rgw-realm=test_realm --commit", "ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"", "ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement=\"2 host01 host02\"", "ceph orch apply rgw SERVICE_NAME", "ceph orch apply rgw foo", "ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME 
--placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000", "ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"2 label:rgw\" --port=8000", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=rgw", "touch radosgw.yml", "service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network", "service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24", "radosgw-admin realm create --rgw-realm=test_realm radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone radosgw-admin period update --rgw-realm=test_realm --commit", "service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_frontend_port: 1234 networks: - 192.169.142.0/24", "cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i radosgw.yml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=rgw", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=test_realm --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default", "radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system", "radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system", "radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email 
protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service", "radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --endpoints=http://rgw.example.com:80", "radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME", "ceph config set rgw rgw_zone us-east-2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service", "ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"", "radosgw-admin sync status", "cephadm shell", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm rgw.test_realm.test_zone_bb", "ceph orch ps", "ceph orch ps", "dnf install -y net-snmp-utils net-snmp", "firewall-cmd --zone=public --add-port=162/udp firewall-cmd --zone=public --add-port=162/udp --permanent", "curl -o CEPH_MIB.txt -L https://raw.githubusercontent.com/ceph/ceph/master/monitoring/snmp/CEPH-MIB.txt scp CEPH_MIB.txt root@host02:/usr/share/snmp/mibs", "mkdir /root/snmptrapd/", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x_ENGINE_ID_ SNMPV3_AUTH_USER_NAME AUTH_PROTOCOL SNMP_V3_AUTH_PASSWORD PRIVACY_PROTOCOL PRIVACY_PASSWORD authuser log,execute SNMP_V3_AUTH_USER_NAME authCommunity log,execute,net SNMP_COMMUNITY_FOR_SNMPV2", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v 
\\n -------------- \\n authCommunity log,execute,net public", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword authuser log,execute myuser", "snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword DES mysecret authuser log,execute myuser", "snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret", "/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/ CONFIGURATION_FILE -Of -Lo :162", "/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/snmptrapd_auth.conf -Of -Lo :162", "NET-SNMP version 5.8 Agent Address: 0.0.0.0 Agent Hostname: <UNKNOWN> Date: 15 - 5 - 12 - 8 - 10 - 4461391 Enterprise OID: . Trap Type: Cold Start Trap Sub-Type: 0 Community/Infosec Context: TRAP2, SNMP v3, user myuser, context Uptime: 0 Description: Cold Start PDU Attribute/Value Pair Array: .iso.org.dod.internet.mgmt.mib-2.1.3.0 = Timeticks: (292276100) 3 days, 19:52:41.00 .iso.org.dod.internet.snmpV2.snmpModules.1.1.4.1.0 = OID: .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.1 = STRING: \"1.3.6.1.4.1.50495.1.2.1.6.2[alertname=CephMgrPrometheusModuleInactive]\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.2 = STRING: \"critical\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.3 = STRING: \"Status: critical - Alert: CephMgrPrometheusModuleInactive Summary: Ceph's mgr/prometheus module is not available Description: The mgr/prometheus module at 10.70.39.243:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module metrics and alerts will no longer function. Open a shell to ceph and use 'ceph -s' to determine whether the mgr is active. 
If the mgr is not active, restart it, otherwise you can check the mgr/prometheus module is loaded with 'ceph mgr module ls' and if it's not listed as enabled, enable it with 'ceph mgr module enable prometheus'\"", "cephadm shell", "ceph orch host label add HOSTNAME snmp-gateway", "ceph orch host label add host02 snmp-gateway", "cat snmp_creds.yml snmp_community: public", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_community: public port: 9464 snmp_destination: 192.168.122.73:162 snmp_version: V2c", "cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3", "cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser snmp_v3_priv_password: mysecret engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3", "ceph orch apply snmp-gateway --snmp_version= V2c_OR_V3 --destination= SNMP_DESTINATION [--port= PORT_NUMBER ] [--engine-id=8000C53F_CLUSTER_FSID_WITHOUT_DASHES_] [--auth-protocol= MDS_OR_SHA ] [--privacy_protocol= DES_OR_AES ] -i FILENAME", "ceph orch apply -i FILENAME .yml", "ceph orch apply snmp-gateway --snmp-version=V2c --destination=192.168.122.73:162 --port=9464 -i snmp_creds.yml", "ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 -i snmp_creds.yml", "ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 --privacy-protocol=AES -i snmp_creds.yml", "ceph orch apply -i snmp-gateway.yml", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf", "cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf", "cp / PATH_TO_BACKUP_LOCATION /ceph.conf /etc/ceph/ceph.conf", "cp /some/backup/location/ceph.conf /etc/ceph/ceph.conf", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf", "cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd unset noscrub ceph osd unset nodeep-scrub", "osd_max_backfills = 1 osd_recovery_max_active = 1 
osd_recovery_op_priority = 1", "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]", "ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1", "ceph cephadm get-pub-key > ~/ PATH", "ceph cephadm get-pub-key > ~/ceph.pub", "ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2", "ssh-copy-id -f -i ~/ceph.pub root@host02", "ceph orch host add NODE_NAME IP_ADDRESS", "ceph orch host add host02 10.10.128.70", "ceph df rados df ceph osd df", "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]", "ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1", "ceph -s ceph df", "ceph df rados df ceph osd df", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd crush rm host03", "ceph -s", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph -s", "ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 0.33554 root default -2 0.04779 host host03 0 0.04779 osd.0 up 1.00000 1.00000 -3 0.04779 host host02 1 0.04779 osd.1 up 1.00000 1.00000 -4 0.04779 host host01 2 0.04779 osd.2 up 1.00000 1.00000 -5 0.04779 host host04 3 0.04779 osd.3 up 1.00000 1.00000 -6 0.07219 host host06 4 0.07219 osd.4 up 0.79999 1.00000 -7 0.07219 host host05 5 0.07219 osd.5 up 0.79999 1.00000", "ceph osd crush add-bucket allDC root ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter ceph osd crush add-bucket DC3 datacenter", "ceph osd crush move DC1 root=allDC ceph osd crush move DC2 root=allDC ceph osd crush move DC3 root=allDC ceph osd crush move host01 datacenter=DC1 ceph osd crush move host02 datacenter=DC1 ceph osd crush move host03 datacenter=DC2 ceph osd crush move host05 datacenter=DC2 ceph osd crush move host04 datacenter=DC3 ceph osd crush move host06 datacenter=DC3", "ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -8 6.00000 root allDC -9 2.00000 datacenter DC1 -4 1.00000 host host01 2 1.00000 osd.2 up 1.00000 1.00000 -3 1.00000 host host02 1 1.00000 osd.1 up 1.00000 1.00000 -10 2.00000 datacenter DC2 -2 1.00000 host host03 0 1.00000 osd.0 up 1.00000 1.00000 -7 1.00000 host host05 5 1.00000 osd.5 up 0.79999 1.00000 -11 2.00000 datacenter DC3 -6 1.00000 host host06 4 1.00000 osd.4 up 0.79999 1.00000 -5 1.00000 host host04 3 1.00000 osd.3 up 1.00000 1.00000 -1 0 root default" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/operations_guide/index
Chapter 14. Deploying machine health checks
Chapter 14. Deploying machine health checks You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: $ oc get infrastructure cluster -o jsonpath='{.status.platform}' 14.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 14.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About listing all the nodes in a cluster Short-circuiting machine health check remediation About the Control Plane Machine Set Operator 14.2.
Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 14.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage. 
There are different remediation implementations depending on the maxUnhealthy value. 14.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 14.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 14.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: $ oc apply -f healthcheck.yml You can configure and deploy a machine health check to detect and repair unhealthy bare metal nodes. 14.4. About power-based remediation of bare metal In a bare metal cluster, remediation of nodes is critical to ensuring the overall health of the cluster. Physically remediating a cluster can be challenging and any delay in putting the machine into a safe or an operational state increases the time the cluster remains in a degraded state, and the risk that subsequent failures might bring the cluster offline. Power-based remediation helps counter such challenges. Instead of reprovisioning the nodes, power-based remediation uses a power controller to power off an inoperable node. This type of remediation is also called power fencing. OpenShift Container Platform uses the MachineHealthCheck controller to detect faulty bare metal nodes. Power-based remediation is fast and reboots faulty nodes instead of removing them from the cluster. Power-based remediation provides the following capabilities: Allows the recovery of control plane nodes Reduces the risk of data loss in hyperconverged environments Reduces the downtime associated with recovering physical machines 14.4.1. MachineHealthChecks on bare metal Machine deletion on a bare metal cluster triggers reprovisioning of a bare metal host. Usually bare metal reprovisioning is a lengthy process, during which the cluster is missing compute resources and applications might be interrupted. To change the default remediation process from machine deletion to host power-cycle, annotate the MachineHealthCheck resource with the machine.openshift.io/remediation-strategy: external-baremetal annotation. After you set the annotation, unhealthy machines are power-cycled by using BMC credentials. 14.4.2. Understanding the remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC notifies the bare metal machine controller, which requests to power off the unhealthy node.
After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The bare metal machine controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the bare metal machine controller restores the annotations and labels that existed on the unhealthy node before its deletion. Note If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 14.4.3. Creating a MachineHealthCheck resource for bare metal Prerequisites The OpenShift Container Platform is installed using installer-provisioned infrastructure (IPI). Access to Baseboard Management Controller (BMC) credentials (or BMC access to each node) Network access to the BMC interface of the unhealthy node. Procedure Create a healthcheck.yaml file that contains the definition of your machine health check. Apply the healthcheck.yaml file to your cluster using the following command: $ oc apply -f healthcheck.yaml Sample MachineHealthCheck resource for bare metal apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: "Ready" timeout: "300s" 6 status: "False" - type: "Ready" timeout: "300s" 7 status: "Unknown" maxUnhealthy: "40%" 8 nodeStartupTimeout: "10m" 9 1 Specify the name of the machine health check to deploy. 2 For bare metal clusters, you must include the machine.openshift.io/remediation-strategy: external-baremetal annotation in the annotations section to enable power-cycle remediation. With this remediation strategy, unhealthy hosts are rebooted instead of removed from the cluster. 3 4 Specify a label for the machine pool that you want to check. 5 Specify the compute machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 6 7 Specify the timeout duration for the node condition. If the condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 8 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 9 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 14.4.4. Troubleshooting issues with power-based remediation To troubleshoot an issue with power-based remediation, verify the following: You have access to the BMC. The BMC is connected to the control plane node that is responsible for running the remediation task.
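As a starting point for that verification, the commands below are one possible way to inspect the relevant resources from a terminal (a sketch only, not part of the original procedure; it assumes the default openshift-machine-api namespace used elsewhere in this chapter):
$ oc get baremetalhosts -n openshift-machine-api    # check the reported power and provisioning state of each host
$ oc get machines -n openshift-machine-api          # check the phase of the machines covered by the health check
$ oc get events -n openshift-machine-api --sort-by=.lastTimestamp    # look for recent remediation-related events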
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/deploying-machine-health-checks
Chapter 79. KafkaConnectTemplate schema reference
Chapter 79. KafkaConnectTemplate schema reference
Used in: KafkaConnectSpec , KafkaMirrorMaker2Spec
Properties, with the type of each template object:
deployment: Template for Kafka Connect Deployment. Type: DeploymentTemplate
podSet: Template for Kafka Connect StrimziPodSet resource. Type: ResourceTemplate
pod: Template for Kafka Connect Pods. Type: PodTemplate
apiService: Template for Kafka Connect API Service. Type: InternalServiceTemplate
headlessService: Template for Kafka Connect headless Service. Type: InternalServiceTemplate
connectContainer: Template for the Kafka Connect container. Type: ContainerTemplate
initContainer: Template for the Kafka init container. Type: ContainerTemplate
podDisruptionBudget: Template for Kafka Connect PodDisruptionBudget. Type: PodDisruptionBudgetTemplate
serviceAccount: Template for the Kafka Connect service account. Type: ResourceTemplate
clusterRoleBinding: Template for the Kafka Connect ClusterRoleBinding. Type: ResourceTemplate
buildPod: Template for Kafka Connect Build Pods. The build pod is used only on OpenShift. Type: PodTemplate
buildContainer: Template for the Kafka Connect Build container. The build container is used only on OpenShift. Type: ContainerTemplate
buildConfig: Template for the Kafka Connect BuildConfig used to build new container images. The BuildConfig is used only on OpenShift. Type: BuildConfigTemplate
buildServiceAccount: Template for the Kafka Connect Build service account. Type: ResourceTemplate
jmxSecret: Template for Secret of the Kafka Connect Cluster JMX authentication. Type: ResourceTemplate
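To illustrate where these template properties sit in practice, the following is a minimal sketch of a KafkaConnect resource that customizes two of the listed templates. The apiVersion, resource name, label, and environment variable are assumed example values, not values taken from this reference:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster        # assumed example name
spec:
  # ... other Kafka Connect configuration ...
  template:
    pod:
      metadata:
        labels:
          app: my-connect         # extra label applied to the Kafka Connect pods (PodTemplate)
    connectContainer:
      env:
        - name: MY_ENV_VAR        # assumed example variable passed to the Kafka Connect container (ContainerTemplate)
          value: my-value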
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaconnecttemplate-reference
function::proc_mem_string
function::proc_mem_string Name function::proc_mem_string - Human-readable string of current proc memory usage Synopsis Arguments None Description Returns a human-readable string showing the size, rss, shr, txt, and data of the memory used by the current process. For example: "size: 301m, rss: 11m, shr: 8m, txt: 52k, data: 2248k".
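As a quick illustration of how this tapset function might be used (a minimal sketch, not taken from this reference; it assumes SystemTap is installed and the one-liner is run as root), the following command prints the memory usage of whichever process is current when the timer fires, then exits:
stap -e 'probe timer.s(5) { printf("%s (pid %d): %s\n", execname(), pid(), proc_mem_string()); exit() }'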
[ "proc_mem_string:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-string
Chapter 3. Recommended cluster scaling practices
Chapter 3. Recommended cluster scaling practices Important The guidance in this section is only relevant for installations with cloud provider integration. These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set. 3.1. Recommended practices for scaling the cluster When scaling up the cluster to higher node counts: Spread nodes across all of the available zones for higher availability. Scale up by no more than 25 to 50 machines at once. Consider creating new machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large. Note Cloud providers might implement a quota for API services. Therefore, gradually scale the cluster. The controller might not be able to create the machines if the replicas in the machine sets are set to higher numbers all at one time. The number of requests the cloud platform, which OpenShift Container Platform is deployed on top of, is able to handle impacts the process. The controller will start to query more while trying to create, check, and update the machines with the status. The cloud platform on which OpenShift Container Platform is deployed has API request limits and excessive queries might lead to machine creation failures due to cloud platform limitations. Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines. Note When scaling large and dense clusters to lower node counts, it might take a large amount of time as the process involves draining or evicting the objects running on the nodes being terminated in parallel. Also, the client might start to throttle the requests if there are too many objects to evict. The default client QPS and burst rates are currently set to 5 and 10 respectively and they cannot be modified in OpenShift Container Platform. 3.2. Modifying a machine set by using the CLI When you modify a machine set, your changes only apply to machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. You can replace the existing machines with new ones that reflect the updated configuration by scaling the machine set. If you need to scale a machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on machines. Because the router is required to access some cluster resources, including the web console, do not scale the machine set to 0 unless you first relocate the router pods. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure Edit the machine set: $ oc edit machineset <machine_set_name> -n openshift-machine-api Note the value of the spec.replicas field, as you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ...
1 The examples in this procedure show a machine set that has a replicas value of 2 . Update the machine set CR with the configuration options that you want and save your changes. List the machines that are managed by the updated machine set by running the following command: $ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated machine set, set the delete annotation by running the following command: $ oc annotate machine/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" Scale the machine set to twice the number of replicas by running the following command: $ oc scale --replicas=4 \ 1 machineset <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated machine set by running the following command: $ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the machine set to the original number of replicas. Scale the machine set to the original number of replicas by running the following command: $ oc scale --replicas=2 \ 1 machineset <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that the machines without the updated configuration are deleted, list the machines that are managed by the updated machine set by running the following command: $ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: $ oc describe machine <machine_name_updated_1> -n openshift-machine-api 3.3. About machine health checks Machine health checks automatically repair unhealthy machines in a particular machine pool. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. Note You cannot apply a machine health check to a machine with the master role.
The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 3.3.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. Control plane machines are not currently supported and are not remediated if they are unhealthy. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 3.4. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 3.4.1. Short-circuiting machine health check remediation Short circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. 
If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 3.4.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 3.4.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 3.5. Creating a MachineHealthCheck resource You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: $ oc apply -f healthcheck.yml
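After applying the file, one way to confirm that the resource was created and to review its status is shown below (a sketch only, assuming the sample resource name example from section 3.4 and the default namespace):
$ oc get machinehealthcheck -n openshift-machine-api
$ oc describe machinehealthcheck example -n openshift-machine-api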
[ "oc edit machineset <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset <machine_set_name> -n openshift-machine-api", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset <machine_set_name> -n openshift-machine-api", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s", "oc describe machine <machine_name_updated_1> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/recommended-cluster-scaling-practices
4.72. glibc
4.72. glibc 4.72.1. RHSA-2011-1526 - Low: glibc bug fix and enhancement update Updated glibc packages that fix two security issues, numerous bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The glibc packages contain the standard C libraries used by multiple programs on the system. These packages contain the standard C and the standard math libraries. Without these two libraries, a Linux system cannot function properly. Security Fixes CVE-2009-5064 A flaw was found in the way the ldd utility identified dynamically linked libraries. If an attacker could trick a user into running ldd on a malicious binary, it could result in arbitrary code execution with the privileges of the user running ldd. CVE-2011-1089 It was found that the glibc addmntent() function, used by various mount helper utilities, did not handle certain errors correctly when updating the mtab (mounted file systems table) file. If such utilities had the setuid bit set, a local attacker could use this flaw to corrupt the mtab file. Red Hat would like to thank Dan Rosenberg for reporting the CVE-2011-1089 issue. Bug Fixes BZ# 676467 The installation of the glibc-debuginfo.i686 and glibc-debuginfo.x86_64 packages failed with a transaction check error due to a conflict between the packages. This update adds the glibc-debuginfo-common package that contains debuginfo data that are common for all platforms. The package depends on the glibc-debuginfo package and the user can now install debuginfo packages for different platforms on a single machine. BZ# 676591 When a process corrupted its heap, the malloc() function could enter a deadlock while creating an error message string. As a result, the process could become unresponsive. With this update, the process uses the mmap() function to allocate memory for the error message instead of the malloc() function. The malloc() deadlock therefore no longer occurs and the process with a corrupted heap now aborts gracefully. BZ# 692838 India has adopted a new symbol (Unicode U+20B9) for the Indian rupee, leaving the previous currency symbol outdated. The rupee symbol has been updated for all Indian locales. BZ# 694386 The strncmp() function, which compares characters of two strings, optimized for IBM POWER4 and POWER7 architectures, could return incorrect data. This happened because the function accessed the data past the zero byte (\0) of the string under certain circumstances. With this update, the function has been modified to access the string data only until the zero byte and returns correct data. BZ# 699724 The crypt() function could cause a memory leak if used with a more complex salt. The leak arose when the underlying NSS library attempted to call the dlopen() function from libnspr4.so with the RTLD_NOLOAD flag. With this update, the dlopen() with the RTLD_NOLOAD flag has been fixed and the memory leak no longer occurs. BZ# 700507 On startup, the nscd daemon logged an error into the log file if SELinux was active. This happened because glibc failed to preserve the respective capabilities on UID change in the AVC thread. With this update, the AVC thread preserves the respective capabilities after the nscd startup.
BZ# 703481, BZ# 703480 When a host was temporarily unavailable, the nscd daemon cached an error, which did not indicate that the problem was only transient, and the request failed. With this update, the daemon caches a value indicating that the unavailability is temporary and attempts to obtain new data again after a set time limit. BZ# 705465 When a module did not provide its own method for retrieving a user list of supplemental group memberships, the libc library's default method was used instead and all groups known to the module were examined to acquire the information. Consequently, applications that attempted to retrieve the information from multiple threads simultaneously interfered with each other and received an incomplete result set. This update provides a module-specific method which prevents this interference. BZ# 706903 On machines using the Network Information Service (NIS), the getpwuid() function failed to resolve UIDs to user names when using the passwd utility in compat mode with a large netgroup. This occurred because glibc was compiled without the -DUSE_BINDINGDIR=1 option. With this update, glibc has been compiled correctly and the getpwuid() function works as expected. BZ# 711927 A debugger could have been presented with an inconsistent state after loading a library. This happened because the ld-linux program did not relocate the library before calling the debugger. With this update, the library is relocated before the debugger is called and the library is accessed successfully. BZ# 714823 The getaddrinfo() function internally uses the simpler gethostbyaddr() functions. In some cases, this could result in incorrect name canonicalization. With this update, the code has been modified and the getaddrinfo() function uses the gethostbyaddr() functions only when appropriate. BZ# 718057 The getpwent() lookups to LDAP (Lightweight Directory Access Protocol) did not return any netgroup users if the NIS (Network Information Service) domain for individual users was not defined in /etc/passwd . This happened when the nss_compat mode was set, as that mode was primarily intended for use with NIS. With this update, getpwent returns LDAP netgroup users even if the users have no NIS domain defined. BZ# 730379 The libresolv library is now compiled with the stack protector enabled. BZ# 731042 The pthread_create() function failed to cancel a thread properly if setting of the real time policy failed. This occurred because the __pthread_enable_asynccancel() function, as a non-leaf function, did not align the stack on the 16-byte boundary as required by the AMD64 ABI (Application Binary Interface). With this update, the stack alignment is preserved across functions. BZ# 736346 When the setgroups() function was called after creating threads, glibc did not signal the other threads, and supplementary group IDs were set only for the calling thread. With this update, cross-thread signaling has been introduced in the function and supplementary group IDs are set on all involved threads as expected. BZ# 737778 The setlocale() function could fail. This happened because parameter values were parsed using the currently set locale. With this update, the parsing is locale-independent. BZ# 738665 A write barrier was missing in the implementation of addition to the linked list of threads. This could result in list corruption after several threads called the fork() function at the same time. The barrier has been added and the problem no longer occurs. 
BZ# 739184 Statically-linked binaries that call the gethostbyname() function terminated because of a division by zero. This happened because the getpagesize() function required the dl_pagesize field in the dynamic linker's read-only state to be set. However, the field was not initialized when a statically linked binary loaded the dynamic linker. With this update, the getpagesize() function no longer requires a non-zero value in the dl_pagesize field and falls back to querying the value through the syscall() function if the field value is not set. Enhancements BZ# 712248 For some queries, the pathconf() and fpathconf() functions need details about each filesystem type: mapping of its superblock magic number to various filesystem properties that cannot be queried from the kernel. This update adds support for the Lustre file system to pathconf() and fpathconf(). BZ# 695595 The glibc package now provides functions optimized for the Intel 6 series and Intel Xeon 5600 processors. BZ# 695963 The glibc package now supports SSE2 (Streaming SIMD Extensions 2) instructions in the strlen() function for AMD FX processors. BZ# 711987 This update adds the f_flags field to support the statvfs output received from the kernel. BZ# 738763 The Linux kernel supports the UDP IP_MULTICAST_ALL socket option, which provides the ability to turn off IP Multicast multiplexing. This update adds the option to glibc. Users are advised to upgrade to these updated glibc packages, which contain backported patches to resolve these issues and add these enhancements. 4.72.2. RHSA-2012:0058 - Moderate: glibc security and bug fix update Updated glibc packages that fix two security issues and three bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The glibc packages contain the standard C libraries used by multiple programs on the system. These packages contain the standard C and the standard math libraries. Without these two libraries, a Linux system cannot function properly. Security Fixes CVE-2009-5029 An integer overflow flaw, leading to a heap-based buffer overflow, was found in the way the glibc library read timezone files. If a carefully-crafted timezone file was loaded by an application linked against glibc, it could cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2011-4609 A denial of service flaw was found in the remote procedure call (RPC) implementation in glibc. A remote attacker able to open a large number of connections to an RPC service that is using the RPC implementation from glibc could use this flaw to make that service use an excessive amount of CPU time. Bug Fixes BZ# 754116 glibc had incorrect information for numeric separators and groupings for specific French, Spanish, and German locales. Therefore, applications utilizing glibc's locale support printed numbers with the wrong separators and groupings when those locales were in use. With this update, the separator and grouping information has been fixed. BZ# 766484 The RHBA-2011:1179 glibc update introduced a regression, causing glibc to incorrectly parse groups with more than 126 members, resulting in applications such as "id" failing to list all the groups a particular user was a member of. 
With this update, group parsing has been fixed. BZ# 769594 glibc incorrectly allocated too much memory due to a race condition within its own malloc routines. This could cause a multi-threaded application to allocate more memory than was expected. With this update, the race condition has been fixed, and malloc's behavior is now consistent with the documentation regarding the MALLOC_ARENA_TEST and MALLOC_ARENA_MAX environment variables. Users should upgrade to these updated packages, which contain backported patches to resolve these issues. 4.72.3. RHSA-2012:0393 - Moderate: glibc security and bug fix update Updated glibc packages that fix one security issue and three bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The glibc packages provide the standard C and standard math libraries used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Security Fix CVE-2012-0864 An integer overflow flaw was found in the implementation of the printf functions family. This could allow an attacker to bypass FORTIFY_SOURCE protections and execute arbitrary code using a format string flaw in an application, even though these protections are expected to limit the impact of such flaws to an application abort. Bug Fixes BZ# 783999 Previously, the dynamic loader generated an incorrect ordering for initialization according to the ELF specification. This could result in incorrect ordering of DSO constructors and destructors. With this update, dependency resolution has been fixed. BZ# 795328 Previously, locking of the main malloc arena was incorrect in the retry path. This could result in a deadlock if an sbrk request failed. With this update, locking of the main arena in the retry path has been fixed. This issue was exposed by a bug fix provided in the RHSA-2012:0058 update. BZ# 799259 Calling memcpy with overlapping arguments on certain processors would generate unexpected results. While such code is a clear violation of ANSI/ISO standards, this update restores prior memcpy behavior. All users of glibc are advised to upgrade to these updated packages, which contain patches to resolve these issues. 4.72.4. RHBA-2012:0566 - glibc bug fix update Updated glibc packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C and standard math libraries used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Bug Fixes BZ# 802855 Previously, glibc looked for an error condition in the wrong location and failed to process a second response buffer in the gaih_getanswer() function. As a consequence, the getaddrinfo() function could not properly return all addresses. This update fixes an incorrect error test condition in gaih_getanswer() so that glibc now correctly parses the second response buffer. The getaddrinfo() function now correctly returns all addresses. BZ# 813859 Previously, if the nscd daemon received a CNAME (Canonical Name) record as a response to a DNS (Domain Name System) query, the cached DNS entry adopted the TTL (Time to Live) value of the underlying "A" or "AAAA" response. 
This caused the nscd daemon to wait for an unexpectedly long time before reloading the DNS entry. With this update, nscd uses the shortest TTL from the response as the TTL value for the entire record. DNS entries are reloaded as expected in this scenario. All users of glibc are advised to upgrade to these updated packages, which fix these bugs.
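The following commands are an illustrative sketch only and are not part of the advisory text. After applying an update that changes how nscd caches DNS answers, an administrator might clear the cached hosts data and exercise the getaddrinfo() path with standard tools; example.com is a placeholder host name and the nscd commands must be run as root:
# Invalidate the cached hosts table so the new TTL handling takes effect immediately
nscd -i hosts
# Print the current nscd configuration and cache statistics
nscd -g
# Resolve a name through the normal NSS/getaddrinfo() path
getent ahosts example.com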
[ "rhel61 nscd: Can't send to audit system: USER_AVC avc: netlink poll: error 4#012: exe=\"?\" sauid=28 hostname=? addr=? terminal=?" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/glibc
2.2. Virtual Performance Monitoring Unit (vPMU)
2.2. Virtual Performance Monitoring Unit (vPMU) The virtual performance monitoring unit (vPMU) displays statistics that indicate how a guest virtual machine is functioning. The virtual performance monitoring unit allows users to identify sources of possible performance problems in their guest virtual machines. The vPMU is based on Intel's PMU (Performance Monitoring Unit) and can only be used on Intel machines. This feature is only supported with guest virtual machines running Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7 and is disabled by default. To verify whether the vPMU is supported on your system, check for the arch_perfmon flag on the host CPU by running: To enable the vPMU, specify the cpu mode in the guest XML as host-passthrough : After the vPMU is enabled, display a virtual machine's performance statistics by running the perf command from the guest virtual machine.
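The following is a minimal sketch of this workflow; guest_name is a placeholder for your guest domain, and the perf invocation is only one example of collecting statistics:
# On the host: confirm that the CPU exposes the arch_perfmon flag
grep -q arch_perfmon /proc/cpuinfo && echo "host CPU supports vPMU"
# Edit the guest definition and set <cpu mode='host-passthrough'/>, then restart the guest
virsh edit guest_name
# Inside the guest: collect system-wide performance counters for five seconds
perf stat -a sleep 5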
[ "cat /proc/cpuinfo|grep arch_perfmon", "virsh dumpxml guest_name |grep \"cpu mode\" <cpu mode='host-passthrough'>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-monitoring_tools-vpmu
Chapter 16. Enabling kdump
Chapter 16. Enabling kdump For your RHEL 9 systems, you can enable or disable the kdump functionality for a specific kernel or for all installed kernels. In either case, you must routinely test the kdump functionality and validate that it is working. 16.1. Enabling kdump for all installed kernels The kdump service starts by enabling kdump.service after the kexec tool is installed. You can enable and start the kdump service for all kernels installed on the machine. Prerequisites You have administrator privileges. Procedure Add the crashkernel= command-line parameter to all installed kernels: xxM is the required memory in megabytes. Enable the kdump service: Verification Check that the kdump service is running: 16.2. Enabling kdump for a specific installed kernel You can enable the kdump service for a specific kernel on the machine. Prerequisites You have administrator privileges. Procedure List the kernels installed on the machine. Add a specific kdump kernel to the system's Grand Unified Bootloader (GRUB) configuration. For example: xxM is the required memory reserve in megabytes. Enable the kdump service. Verification Check that the kdump service is running. 16.3. Disabling the kdump service You can stop the kdump.service and disable the service from starting on your RHEL 9 systems. Prerequisites Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . All configurations for installing kdump are set up according to your needs. For details, see Installing kdump . Procedure To stop the kdump service in the current session: To disable the kdump service: Warning It is recommended to set kptr_restrict=1 as the default. When kptr_restrict is set to 1, the kdumpctl service loads the crash kernel regardless of whether Kernel Address Space Layout Randomization ( KASLR ) is enabled. If kptr_restrict is not set to 1 and KASLR is enabled, the contents of the /proc/kcore file are generated as all zeros. The kdumpctl service then fails to access the /proc/kcore file and load the crash kernel. The kexec-kdump-howto.txt file displays a warning message, which recommends that you set kptr_restrict=1 . Verify the following in the sysctl.conf file to ensure that the kdumpctl service loads the crash kernel: kernel.kptr_restrict=1 is set in the sysctl.conf file. Additional resources Managing systemd
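The following sketch combines the steps above for a single kernel. The kernel file name and the 256M reservation are placeholders that you must adjust for your system:
# Reserve crash kernel memory for one installed kernel (placeholder kernel path and size)
grubby --update-kernel=/boot/vmlinuz-5.14.0-284.el9.x86_64 --args="crashkernel=256M"
# Enable and start the kdump service, then check its status
systemctl enable --now kdump.service
systemctl status kdump.service
# Confirm that kptr_restrict is set to 1 so that kdumpctl can load the crash kernel
sysctl kernel.kptr_restrict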
[ "grubby --update-kernel=ALL --args=\"crashkernel=xxM\"", "systemctl enable --now kdump.service", "systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)", "ls -a /boot/vmlinuz- * /boot/vmlinuz-0-rescue-2930657cd0dc43c2b75db480e5e5b4a9 /boot/vmlinuz-4.18.0-330.el8.x86_64 /boot/vmlinuz-4.18.0-330.rt7.111.el8.x86_64", "grubby --update-kernel= vmlinuz-4.18.0-330.el8.x86_64 --args=\"crashkernel= xxM \"", "systemctl enable --now kdump.service", "systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)", "systemctl stop kdump.service", "systemctl disable kdump.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/enabling-kdumpmanaging-monitoring-and-updating-the-kernel
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for using Red Hat Software Collections 3.6 and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the rh-perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the $X_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. 
To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . Note that only the latest version of each container image is supported. 
The following container images are available with Red Hat Software Collections 3.6: rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/nginx-118-rhel7 rhscl/nodejs-14-rhel7 rhscl/perl-530-rhel7 rhscl/php-73-rhel7 rhscl/ruby-25-rhel7 The following container images are based on Red Hat Software Collections 3.5: rhscl/python-38-rhel7 rhscl/ruby-27-rhel7 rhscl/varnish-6-rhel7 The following container images are based on Red Hat Software Collections 3.4: rhscl/nginx-116-rhel7 rhscl/nodejs-12-rhel7 rhscl/postgresql-12-rhel7 The following container images are based on Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 The following container images are based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 rhscl/nodejs-10-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/postgresql-10-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 2: rhscl/python-27-rhel7 rhscl/s2i-base-rhel7
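As an illustrative sketch only, a container image from this list can be pulled and run with podman. The registry path and the published port below are assumptions that you should verify against the image documentation, and authentication to registry.redhat.io is required:
podman login registry.redhat.io
podman pull registry.redhat.io/rhscl/httpd-24-rhel7
# The RHSCL httpd image is assumed to listen on port 8080 inside the container
podman run -d --name httpd24 -p 8080:8080 registry.redhat.io/rhscl/httpd-24-rhel7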
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/chap-Usage
Data Grid Security Guide
Data Grid Security Guide Red Hat Data Grid 8.5 Enable and configure Data Grid security Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_security_guide/index
Chapter 19. Node APIs
Chapter 19. Node APIs 19.1. Node APIs 19.1.1. RuntimeClass [node.k8s.io/v1] Description RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/ Type object 19.2. RuntimeClass [node.k8s.io/v1] Description RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/ Type object Required handler 19.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources handler string handler specifies the underlying runtime and configuration that the CRI implementation will use to handle pods of this class. The possible values are specific to the node & CRI configuration. It is assumed that all handlers are available on every node, and handlers of the same name are equivalent on every node. For example, a handler called "runc" might specify that the runc OCI runtime (using native Linux containers) will be used to run the containers in a pod. The Handler must be lowercase, conform to the DNS Label (RFC 1123) requirements, and is immutable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata overhead object Overhead structure represents the resource overhead associated with running a pod. scheduling object Scheduling specifies the scheduling constraints for nodes supporting a RuntimeClass. 19.2.1.1. .overhead Description Overhead structure represents the resource overhead associated with running a pod. Type object Property Type Description podFixed object (Quantity) podFixed represents the fixed resource overhead associated with running a pod. 19.2.1.2. .scheduling Description Scheduling specifies the scheduling constraints for nodes supporting a RuntimeClass. Type object Property Type Description nodeSelector object (string) nodeSelector lists labels that must be present on nodes that support this RuntimeClass. Pods using this RuntimeClass can only be scheduled to a node matched by this selector. The RuntimeClass nodeSelector is merged with a pod's existing nodeSelector. Any conflicts will cause the pod to be rejected in admission. 
tolerations array (Toleration) tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. 19.2.2. API endpoints The following API endpoints are available: /apis/node.k8s.io/v1/runtimeclasses DELETE : delete collection of RuntimeClass GET : list or watch objects of kind RuntimeClass POST : create a RuntimeClass /apis/node.k8s.io/v1/watch/runtimeclasses GET : watch individual changes to a list of RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/node.k8s.io/v1/runtimeclasses/{name} DELETE : delete a RuntimeClass GET : read the specified RuntimeClass PATCH : partially update the specified RuntimeClass PUT : replace the specified RuntimeClass /apis/node.k8s.io/v1/watch/runtimeclasses/{name} GET : watch changes to an object of kind RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 19.2.2.1. /apis/node.k8s.io/v1/runtimeclasses Table 19.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of RuntimeClass Table 19.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 19.3. Body parameters Parameter Type Description body DeleteOptions schema Table 19.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RuntimeClass Table 19.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.6. HTTP responses HTTP code Reponse body 200 - OK RuntimeClassList schema 401 - Unauthorized Empty HTTP method POST Description create a RuntimeClass Table 19.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.8. Body parameters Parameter Type Description body RuntimeClass schema Table 19.9. HTTP responses HTTP code Reponse body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 202 - Accepted RuntimeClass schema 401 - Unauthorized Empty 19.2.2.2. /apis/node.k8s.io/v1/watch/runtimeclasses Table 19.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead. Table 19.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.2.3. /apis/node.k8s.io/v1/runtimeclasses/{name} Table 19.12. Global path parameters Parameter Type Description name string name of the RuntimeClass Table 19.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RuntimeClass Table 19.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 19.15. Body parameters Parameter Type Description body DeleteOptions schema Table 19.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RuntimeClass Table 19.17. HTTP responses HTTP code Reponse body 200 - OK RuntimeClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RuntimeClass Table 19.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 19.19. Body parameters Parameter Type Description body Patch schema Table 19.20. HTTP responses HTTP code Reponse body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RuntimeClass Table 19.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.22. Body parameters Parameter Type Description body RuntimeClass schema Table 19.23. HTTP responses HTTP code Reponse body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 401 - Unauthorized Empty 19.2.2.4. /apis/node.k8s.io/v1/watch/runtimeclasses/{name} Table 19.24. Global path parameters Parameter Type Description name string name of the RuntimeClass Table 19.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, we require the `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset: an Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind RuntimeClass. Deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 19.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
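As an illustration of the watch endpoints described above, the following is a minimal sketch of how a client might call them with curl. The API server URL, the token lookup, and the RuntimeClass name example-runtimeclass are placeholder assumptions, not values taken from this reference; substitute your own cluster details.

# Placeholder values: replace API_SERVER and the RuntimeClass name with your own.
API_SERVER=https://api.example.com:6443
TOKEN=$(oc whoami -t)

# Deprecated per-object watch endpoint documented in this section:
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/node.k8s.io/v1/watch/runtimeclasses/example-runtimeclass?allowWatchBookmarks=true&timeoutSeconds=60"

# Recommended alternative: a list operation with watch=true, filtered to a single item.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/node.k8s.io/v1/runtimeclasses?watch=true&fieldSelector=metadata.name%3Dexample-runtimeclass&allowWatchBookmarks=true&timeoutSeconds=60"

Both calls stream WatchEvent objects until the timeout expires; the second form is the approach recommended by the deprecation note above.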
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/node-apis-1
Chapter 1. Integrating communications applications with the Hybrid Cloud Console
Chapter 1. Integrating communications applications with the Hybrid Cloud Console Receive event notifications in your preferred communications application by connecting the Hybrid Cloud Console with Microsoft Teams, Google Chat, or Slack. 1.1. Integrating Microsoft Teams with the Hybrid Cloud Console You can configure the Red Hat Hybrid Cloud Console to send event notifications to all users on a new or existing channel in Microsoft Teams. The Microsoft Teams integration supports events from all services in the Hybrid Cloud Console. The Microsoft Teams integration uses incoming webhooks to receive event data. Contacting support If you have any issues with integrating the Hybrid Cloud Console with Microsoft Teams, contact Red Hat for support. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help ( ? icon) > Open a support case , or view more options from ? > Support options . Microsoft will not provide troubleshooting. The Hybrid Cloud Console integration with Microsoft Teams is fully supported by Red Hat. 1.1.1. Configuring Microsoft Teams for integration with the Hybrid Cloud Console You can use incoming webhooks to configure Microsoft Teams to receive event notifications from the Red Hat Hybrid Cloud Console or a third-party application. Prerequisites You have admin permissions for Microsoft Teams. You have Organization Administrator or Notifications administrator permissions for the Hybrid Cloud Console. Procedure Create a new channel in Microsoft Teams or select an existing channel. Navigate to Apps and search for the Incoming Webhook application. Select the Incoming Webhook application and click Add to a team . Select the team or channel name and click Set up a connector . Enter a name for the incoming webhook (for example, Red Hat Notifications ). This name appears on all notifications that the Microsoft Teams channel receives from the Red Hat Hybrid Cloud Console through this incoming webhook. Optional: Upload an image to associate with the name of the incoming webhook. This image appears on all notifications that the Microsoft Teams channel receives from the Hybrid Cloud Console through this incoming webhook. Click Create to complete creation and display the webhook URL. Copy the URL to your clipboard. You need the URL to configure notifications in the Hybrid Cloud Console. Click Done . The Microsoft Teams page displays the channel and the incoming webhook. In the Hybrid Cloud Console, navigate to Settings > Integrations . Click the Communications tab. Click Add integration . Select Microsoft Office Teams as the integration type, and then click . In the Integration name field, enter a name for your integration (for example, console-teams ). Paste the incoming webhook URL that you copied from Microsoft Teams into the Endpoint URL field. Click . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you would like your integration to react to. To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Microsoft Teams integration in the Integrations > Communications list. Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. 
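In addition to the console's built-in test described below, you can optionally confirm that the Microsoft Teams incoming webhook itself accepts messages by posting a test payload to it directly. This is a minimal sketch; the webhook URL shown is a placeholder for the URL you copied from Microsoft Teams.

# Placeholder: substitute the incoming webhook URL copied from Microsoft Teams.
WEBHOOK_URL='https://example.webhook.office.com/webhookb2/REPLACE-ME'
curl -H 'Content-Type: application/json' \
  -d '{"text": "Test message for the Hybrid Cloud Console integration"}' \
  "$WEBHOOK_URL"

If the webhook is configured correctly, the test message appears in the Teams channel you selected.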
Verification Create a test notification to confirm you have correctly connected Microsoft Teams to the Hybrid Cloud Console: to your Microsoft Teams integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open your Microsoft Teams channel and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Microsoft Teams event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.1.2. Creating the behavior group for the Microsoft Teams integration A behavior group defines which notifications will be sent to external services such as Microsoft Teams when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. The Microsoft Teams integration is configured. For information about configuring Microsoft Teams integration, see Section 1.1.1, "Configuring Microsoft Teams for integration with the Hybrid Cloud Console" . Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Type a name for the behavior group and click . In the Actions and Recipients step, select Integration: Microsoft Teams from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, console-teams ) and click . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click . Review your behavior group settings and click Finish . The new behavior group appears on the Notifications > Configure Events page in the Behavior Groups tab. Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Microsoft Teams. Select the channel that you configured from the left menu. If the setup process succeeded, the page displays a notification from the Hybrid Cloud Console. The notification contains the name of the host that triggered the event and a link to that host, as well as the number of events and a link that opens the corresponding Hybrid Cloud Console service. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Microsoft Teams . If the label is green, the notification succeeded. 
If the label is red, verify that the incoming webhook connector was properly created in Microsoft Teams, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.1.3. Additional resources For information about troubleshooting your Microsoft Teams integration, see Troubleshooting Hybrid Cloud Console integrations . For more information about webhooks, see Create an Incoming Webhook and Webhooks and Connectors in the Microsoft Teams documentation. 1.2. Integrating Google Chat with the Red Hat Hybrid Cloud Console You can configure the Red Hat Hybrid Cloud Console to send event notifications to a new or existing Google space in Google Chat. The Google Chat integration supports events from all Hybrid Cloud Console services. The integration with the Hybrid Cloud Console notifications service uses incoming webhooks to receive event data. Each Red Hat account configures how and who can receive these events, with the ability to perform actions depending on the event type. Contacting Support If you have any issues with the Hybrid Cloud Console integration with Google Chat, contact Red Hat for support. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help > Open a support case , or view more options from Help > Support options . Google will not provide troubleshooting. The Hybrid Cloud Console integration with Google Chat is fully supported by Red Hat. 1.2.1. Configuring incoming webhooks in Google Chat In Google spaces, create a new webhook to connect with the Hybrid Cloud Console. Prerequisites You have a new or existing Google space in Google Chat. Procedure In your Google space, click the arrow on the space name to open the dropdown menu: Select Apps & Integrations . Click Webhooks . Enter the following information in the Incoming webhooks dialog: Enter a name for the integration (for example, Engineering Google Chat ). Optional: To add an avatar for the notifications, enter a URL to an image. Click Save to generate the webhook URL. Copy the webhook URL to use for configuration in the Hybrid Cloud Console. Additional resources See Send messages to Google Chat with incoming webhooks in the Google Chat documentation for more detailed information about Google Chat configuration. 1.2.2. Configuring the Google Chat integration in the Red Hat Hybrid Cloud Console Create a new integration in the Hybrid Cloud Console using the webhook URL from Google Chat. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have a Google Chat incoming webhook. Procedure In the Hybrid Cloud Console, navigate to Settings > Integrations . Select the Communications tab. Click Add integration . Select Google Chat as the integration type, and then click . In the Integration name field, enter a name for your integration (for example, console-gchat ). Paste the incoming webhook URL that you copied from your Google space into the Endpoint URL field, and click . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you would like your integration to react to. 
To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Google Chat integration in the Integrations > Communications list. Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. Verification Create a test notification to confirm you have successfully connected Google Chat to the Hybrid Cloud Console: to your Google Chat integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open your Google space and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Google Chat event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.2.3. Creating the behavior group for the Google Chat integration A behavior group defines which notifications will be sent to external services such as Google Chat when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have configured the Google Chat integration. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Type a name for the behavior group and click . In the Actions and Recipients step, select Integration: Google Chat from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, console-gchat ) and click . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click . Review your behavior group settings and click Finish . The new behavior group is listed on the Notifications page. Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Google Chat. In your Google Space, check for notifications from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Google Chat . If the label is green, the notification succeeded. If the label is red, the integration might need to be adjusted. If the integration is not working as expected, verify that the incoming webhook connector was properly created in Google Chat, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.2.4. 
Additional resources For information about troubleshooting your Google Chat integration, see Troubleshooting Hybrid Cloud Console integrations . See the Google Chat documentation about incoming webhooks for more detailed information about Google Chat configuration. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . 1.3. Integrating Slack with the Hybrid Cloud Console You can configure the Hybrid Cloud Console to send event notifications to a Slack channel or directly to a user. The Slack integration supports events from all Hybrid Cloud Console services. Note The Slack integration in this example is configured for Red Hat Enterprise Linux. The integration also works with Red Hat OpenShift and Hybrid Cloud Console events. The Slack integration uses incoming webhooks to receive event data. For more information about webhooks, see Sending messages using incoming webhooks in the Slack API documentation. Contacting support If you have any issues with the Hybrid Cloud Console integration with Slack, contact Red Hat for support. Slack will not provide troubleshooting. The Hybrid Cloud Console integration with Slack is fully supported by Red Hat. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help > Open a support case , or view more options from Help > Support options . 1.3.1. Configuring incoming webhooks in Slack To prepare Slack for integration with the Hybrid Cloud Console, you must configure incoming webhooks in Slack. Prerequisites You have owner or admin permissions to the Slack instance where you want to add incoming webhooks. You have App Manager permissions to add Slack apps to a channel. You have a Slack channel or user to receive notifications. Procedure Create a Slack app: Go to the Slack API web page and click the Create your Slack app button. This opens the Create an app dialog. Select From scratch to use the Slack configuration UI to create your app. Enter a name for your app and select the workspace where you want to receive notifications. Note If you see a message that administrator approval is required, you can request approval in the step. Click Create App to finish creating the Slack app. Enable incoming webhooks: Under the Features heading in the navigation panel, click Incoming Webhooks . Toggle the Activate Incoming Webhooks switch to On . Click the Request to Add New Webhook button. If required, enter a message to your administrators to grant access to your app and click Submit Request . A success message confirms you have configured this correctly. Create an incoming webhook: Under Settings in the navigation panel, click Install App . In the Install App section, click the Install to workspace button. Select the channel where you want your Slack app to post notifications, or select a user to send notifications to as direct messages. Click Allow to save changes. Optional: Configure how your Hybrid Cloud Console notifications appear in Slack: Under Settings in the navigation panel, click Basic Information . Scroll down to Display Information . Edit your app name, description, icon, and background color as desired. Click Save Changes . Copy the webhook URL: Under Features , click Incoming Webhooks . Click the Copy button to the webhook URL. You will use the URL to set up the integration in the Hybrid Cloud Console in Section 1.3.2, "Configuring the Slack integration in the Red Hat Hybrid Cloud Console" . 
Verification Open the Slack channel or user you selected during configuration, and check for a message confirming you have added the integration. Additional resources For information about webhooks in Slack, see Sending messages using incoming webhooks . For information about workflows, see Build a workflow: Create a workflow that starts outside of Slack . For information about managing app approvals, see Managing app approvals in Enterprise Grid workspaces . For general help with Slack, see the Slack Help Center . 1.3.2. Configuring the Slack integration in the Red Hat Hybrid Cloud Console After you have configured an incoming webhook in Slack, you can configure the Hybrid Cloud Console to send event notifications to the Slack channel or user you configured. Prerequisites You have Organization Administrator or Notifications administrator permissions for the Red Hat Hybrid Cloud Console. Procedure If necessary, go to the Slack API web page and copy the webhook URL that you configured. Note See Section 1.3.1, "Configuring incoming webhooks in Slack" for the steps to create a Slack webhook URL. In the Hybrid Cloud Console, navigate to Settings > Integrations . Select the Communications tab. Click Add integration . Select Slack as the integration type and click . Enter a name for the integration (for example, My Slack notifications ). Paste the Slack webhook URL that you copied from Slack into the Workspace URL field and click . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you want your integration to react to and click . To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Slack integration in the Integrations > Communications list. Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. Verification Create a test notification to confirm you have successfully connected Slack to the Hybrid Cloud Console: to your Slack integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open the Slack channel you configured and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Slack event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.3.3. Creating the behavior group for the Slack integration A behavior group defines which notifications will be sent to external services such as Slack when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have configured the Slack integration. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . 
Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Enter a name for the behavior group and click . In the Actions and Recipients step, select Integration: Slack from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, My Slack notifications ) and click . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click . Review your behavior group settings and click Finish . The new behavior group appears on the Notifications > Configure Events page in the Behavior Groups tab. Note You can create and edit multiple behavior groups to include any additional platforms that the notifications service supports. Select Settings > Integrations and click the Communications tab. When the Slack integration is ready to send events to Slack, the Last connection attempt column shows Ready . If the notification reached Slack successfully, the Last connection attempt column shows Success . Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Slack. In your Slack channel, check for notifications from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Slack . If the label is green, the notification succeeded. If the label is red, the integration might need to be adjusted. If the integration is not working as expected, verify that the incoming webhook connector was properly created in Slack, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.3.4. Additional resources For detailed information about Slack configuration, see Sending messages using incoming webhooks in the Slack documentation. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . For information about troubleshooting your Slack integration, see Troubleshooting Hybrid Cloud Console integrations .
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/assembly-integrating-comms_integrations
Chapter 13. Deployment errors
Chapter 13. Deployment errors 13.1. Order of cleanup operations Depending on where deployment fails, you may need to perform a number of cleanup operations. Always perform cleanup for tasks in reverse order to the order of the tasks themselves. For example, during deployment, we perform the following tasks in order: Configure Network-Bound Disk Encryption using Ansible. Configure Red Hat Gluster Storage using the Web Console. Configure the Hosted Engine using the Web Console. If deployment fails at step 2, perform cleanup for step 2. Then, if necessary, perform cleanup for step 1. 13.2. Failed to deploy storage If an error occurs during storage deployment , the deployment process halts and Deployment failed is displayed. Deploying storage failed Review the Web Console output for error information. Click Clean up to remove any potentially incorrect changes to the system. If your deployment uses Network-Bound Disk Encryption, you must then follow the process in Cleaning up Network-Bound Disk Encryption after a failed deployment . Click Redeploy and correct any entered values that may have caused errors. If you need help resolving errors, contact Red Hat Support with details. Return to storage deployment to try again. 13.2.1. Cleaning up Network-Bound Disk Encryption after a failed deployment If you are using Network-Bound Disk Encryption and deployment fails, you cannot just click the Cleanup button in order to try again. You must also run the luks_device_cleanup.yml playbook to complete the cleaning process before you start again. Run this playbook as shown, providing the same luks_tang_inventory.yml file that you provided during setup. 13.2.2. Error: VDO signature detected on device During storage deployment, the Create VDO with specified size task may fail with the VDO signature detected on device error. This error occurs when the specified device is already a VDO device, or when the device was previously configured as a VDO device and was not cleaned up correctly. If you specified a VDO device accidentally , return to storage configuration and specify a different non-VDO device. If you specified a device that has been used as a VDO device previously: Check the device type. If you see TYPE="vdo" in the output, this device was not cleaned correctly. Follow the steps in Manually cleaning up a VDO device to use this device. Then return to storage deployment to try again. Avoid this error by specifying clean devices, and by using the Clean up button in the storage deployment window to clean up any failed deployments. 13.2.3. Manually cleaning up a VDO device Follow this process to manually clean up a VDO device that has caused a deployment failure. Warning This is a destructive process. You will lose all data on the device that you clean up. Procedure Clean the device using wipefs. Verify Confirm that the device does not have TYPE="vdo" set any more. steps Return to storage deployment to try again. 13.3. Failed to prepare virtual machine If an error occurs while preparing the virtual machine in Hosted Engine deployment , deployment pauses, and you see a screen similar to the following: Preparing virtual machine failed Review the Web Console output for error information. Click Back and correct any entered values that may have caused errors. Ensure proper values for network configurations are provided in VM tab. If you need help resolving errors, contact Red Hat Support with details. Ensure that the rhvm-appliance package is available on the first hyperconverged host. 
Return to Hosted Engine deployment to try again. If you closed the deployment wizard while you resolved errors, you can select Use existing configuration when you retry the deployment process. 13.4. Failed to deploy hosted engine If an error occurs during hosted engine deployment, deployment pauses and Deployment failed is displayed. Hosted engine deployment failed Review the Web Console output for error information. Remove the contents of the engine volume. Mount the engine volume. Remove the contents of the volume. Unmount the engine volume. Click Redeploy and correct any entered values that may have caused errors. If the deployment fails after performing the above steps a, b, and c, perform these steps again and this time clean the Hosted Engine: Return to Hosted Engine deployment to try again. If you closed the deployment wizard while you resolved errors, you can select Use existing configuration when you retry the deployment process. If you need help resolving errors, contact Red Hat Support with details.
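The engine volume cleanup described above corresponds to the commands listed at the end of this chapter. As a rough consolidated sketch, using the same example mount point /mnt/test and the placeholder host name <server1> from that listing:

# Mount the engine volume (replace <server1> with one of your hyperconverged hosts).
mount -t glusterfs <server1>:/engine /mnt/test
# Remove the contents of the volume.
rm -rf /mnt/test/*
# Unmount the engine volume.
umount /mnt/test
# If redeployment still fails after cleaning the volume, also clean the Hosted Engine before retrying.
ovirt-hosted-engine-cleanup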
[ "ansible-playbook -i luks_tang_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_device_cleanup.yml --ask-vault-pass", "TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:9 failed: [host1.example.com] (item={u'writepolicy': u'auto', u'name': u'vdo_sdb', u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': u'off', u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G', u'blockmapcachesize': u'128M'}) => {\"ansible_loop_var\": \"item\", \"changed\": false, \"err\": \" vdo: ERROR - vdo signature detected on /dev/sdb at offset 0; use --force to override\\n\", \"item\": {\"blockmapcachesize\": \"128M\", \"device\": \"/dev/sdb\", \"emulate512\": \"off\", \"logicalsize\": \"11000G\", \"name\": \"vdo_sdb\", \"readcache\": \"enabled\", \"readcachesize\": \"20M\", \"slabsize\": \"32G\", \"writepolicy\": \"auto\"}, \"msg\": \"Creating VDO vdo_sdb failed.\", \"rc\": 5}", "blkid -p /dev/sdb /dev/sdb: UUID=\"fee52367-c2ca-4fab-a6e9-58267895fe3f\" TYPE=\"vdo\" USAGE=\"other\"", "wipefs -a /dev/sdX", "blkid -p /dev/sdb /dev/sdb: UUID=\"fee52367-c2ca-4fab-a6e9-58267895fe3f\" TYPE=\"vdo\" USAGE=\"other\"", "yum install rhvm-appliance", "mount -t glusterfs <server1>:/engine /mnt/test", "rm -rf /mnt/test/*", "umount /mnt/test", "ovirt-hosted-engine-cleanup" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/tshoot-deploy-error
Chapter 46. Infinispan Embedded
Chapter 46. Infinispan Embedded Since Camel 2.13 Both producer and consumer are supported This component allows you to interact with Infinispan distributed data grid / cache. Infinispan is an extremely scalable, highly available key / value data store and data grid platform written in Java. The camel-infinispan-embedded component includes the following features. Local Camel Consumer - Receives cache change notifications and sends them to be processed. This can be done synchronously or asynchronously, and is also supported with a replicated or distributed cache. Local Camel Producer - A producer creates and sends messages to an endpoint. The camel-infinispan producer uses GET , PUT , REMOVE , and CLEAR operations. The local producer is also supported with a replicated or distributed cache. The events are processed asynchronously. 46.1. Dependencies When using infinispan-embedded with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 46.2. URI format The producer allows sending messages to a local infinispan cache. The consumer allows listening for events from local infinispan cache. If no cache configuration is provided, embedded cacheContainer is created directly in the component. 46.3. Configuring Options Camel components are configured on two separate levels. component level endpoint level 46.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and more. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 46.3.2. Configuring Endpoint Options Endpoints have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. Use Property Placeholders to configure options that allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 46.4. Component Options The Infinispan Embedded component supports 20 options that are listed below. Name Description Default Type configuration (common) Component configuration. InfinispanEmbeddedConfiguration queryBuilder (common) Specifies the query builder. 
InfinispanQueryBuilder bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value* (producer) Set a specific value for producer operations. Object autowiredEnabled (advanced) Whether auto-wiring is enabled. This is used for automatic auto-wiring options (the option must be marked as auto-wired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. 
BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 46.5. Endpoint Options The Infinispan Embedded endpoint is configured using URI syntax. Following are the path and query parameters. 46.5.1. Path Parameters (1 parameters) Name Description Default Type cacheName (common) Required The name of the cache to use. Use current to use the existing cache name from the currently configured cached manager. Or use default for the default cache manager name. String 46.5.2. Query Parameters (20 parameters) Name Description Default Type queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: * InOnly * InOut * InOptionalOut ExchangePattern defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 46.6. Message Headers The Infinispan Embedded component supports 22 message headers that are listed below. Name Description Default Type CamelInfinispanEventType (consumer) Constant: EVENT_TYPE The type of the received event. String CamelInfinispanIsPre (consumer) Constant: IS_PRE true if the notification is before the event has occurred, false if after the event has occurred. boolean CamelInfinispanCacheName (common) Constant: CACHE_NAME The cache participating in the operation or event. String CamelInfinispanKey (common) Constant: KEY The key to perform the operation to or the key generating the event. Object CamelInfinispanValue (producer) Constant: VALUE The value to use for the operation. Object CamelInfinispanDefaultValue (producer) Constant: DEFAULT_VALUE The default value to use for a getOrDefault. Object CamelInfinispanOldValue (producer) Constant: OLD_VALUE The old value to use for a replace. Object CamelInfinispanMap (producer) Constant: MAP A Map to use in case of CamelInfinispanOperationPutAll operation. Map CamelInfinispanOperation (producer) Constant: OPERATION The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC InfinispanOperation CamelInfinispanOperationResult (producer) Constant: RESULT The name of the header whose value is the result. String CamelInfinispanOperationResultHeader (producer) Constant: RESULT_HEADER Store the operation result in a header instead of the message body. String CamelInfinispanLifespanTime (producer) Constant: LIFESPAN_TIME The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. long CamelInfinispanTimeUnit (producer) Constant: LIFESPAN_TIME_UNIT The Time Unit of an entry Lifespan Time. 
Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanMaxIdleTime (producer) Constant: MAX_IDLE_TIME The maximum amount of time an entry is allowed to be idle for before it is considered as expired. long CamelInfinispanMaxIdleTimeUnit (producer) Constant: MAX_IDLE_TIME_UNIT The Time Unit of an entry Max Idle Time. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanIgnoreReturnValues (consumer) Constant: IGNORE_RETURN_VALUES Signals that write operation's return value are ignored, so reading the existing value from a store or from a remote node is not necessary. false boolean CamelInfinispanEventData (consumer) Constant: EVENT_DATA The event data. Object CamelInfinispanQueryBuilder (producer) Constant: QUERY_BUILDER The QueryBuilder to use for QUERY command, if not present the command defaults to InifinispanConfiguration's one. InfinispanQueryBuilder CamelInfinispanCommandRetried (consumer) Constant: COMMAND_RETRIED This will be true if the write command that caused this had to be retried again due to a topology change. boolean CamelInfinispanEntryCreated (consumer) Constant: ENTRY_CREATED Indicates whether the cache entry modification event is the result of the cache entry being created. boolean CamelInfinispanOriginLocal (consumer) Constant: ORIGIN_LOCAL true if the call originated on the local cache instance; false if originated from a remote one. boolean CamelInfinispanCurrentState (consumer) Constant: CURRENT_STATE True if this event is generated from an existing entry as the listener has Listener. boolean 46.7. Camel Operations This section lists all available operations along with their header information. Table 46.1. Table 1. Put Operations Operation Name Description InfinispanOperation.PUT Puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTASYNC Asynchronously puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTIFABSENT Puts a key/value pair in the cache if it did not exist, optionally with expiration InfinispanOperation.PUTIFABSENTASYNC Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 46.2. Table 2. Put All Operations Operation Name Description InfinispanOperation.PUTALL Adds multiple entries to a cache, optionally with expiration CamelInfinispanOperation.PUTALLASYNC Asynchronously adds multiple entries to a cache, optionally with expiration Required Headers : CamelInfinispanMap Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Table 46.3. Table 3. Get Operations Operation Name Description InfinispanOperation.GET Retrieves the value associated with a specific key from the cache InfinispanOperation.GETORDEFAULT Retrieves the value, or default value, associated with a specific key from the cache Required Headers : CamelInfinispanKey Table 46.4. Table 4. Contains Key Operation Operation Name Description InfinispanOperation.CONTAINSKEY Determines whether a cache contains a specific key Required Headers CamelInfinispanKey Result Header CamelInfinispanOperationResult Table 46.5. Table 5. 
Contains Value Operation Operation Name Description InfinispanOperation.CONTAINSVALUE Determines whether a cache contains a specific value Required Headers : CamelInfinispanKey Table 46.6. Table 6. Remove Operations Operation Name Description InfinispanOperation.REMOVE Removes an entry from a cache, optionally only if the value matches a given one InfinispanOperation.REMOVEASYNC Asynchronously removes an entry from a cache, optionally only if the value matches a given one Required Headers : CamelInfinispanKey Optional Headers : CamelInfinispanValue Result Header : CamelInfinispanOperationResult Table 46.7. Table 7. Replace Operations Operation Name Description InfinispanOperation.REPLACE Conditionally replaces an entry in the cache, optionally with expiration InfinispanOperation.REPLACEASYNC Asynchronously conditionally replaces an entry in the cache, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue CamelInfinispanOldValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 46.8. Table 8. Clear Operations Operation Name Description InfinispanOperation.CLEAR Clears the cache InfinispanOperation.CLEARASYNC Asynchronously clears the cache Table 46.9. Table 9. Size Operation Operation Name Description InfinispanOperation.SIZE Returns the number of entries in the cache Result Header CamelInfinispanOperationResult Table 46.10. Table 10. Stats Operation Operation Name Description InfinispanOperation.STATS Returns statistics about the cache Result Header : CamelInfinispanOperationResult Table 46.11. Table 11. Query Operation Operation Name Description InfinispanOperation.QUERY Executes a query on the cache Required Headers : CamelInfinispanQueryBuilder Result Header : CamelInfinispanOperationResult Note Write methods like put(key, value) and remove(key) do not return the value by default. 46.8. Examples Put a key/value into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName&cacheContainer=#cacheContainer"); (3) Set the operation to perform Set the key used to identify the element in the cache Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, as example. 
from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT).constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName"); Set the lifespan of the entry Set the time unit for the lifespan Queries from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager") ; Custom Listeners from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result"); The instance of myCustomListener must exist and Camel should be able to look it up from the Registry . Users are encouraged to extend the org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener class and annotate the resulting class with @Listener which can be found in package org.infinispan.notifications . 46.9. Using the Infinispan based idempotent repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml"); InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository("idempotent"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .idempotentConsumer(header("MessageID"), repo) (3) .to("mock:result"); } }); Configure the cache Configure the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer idempotentRepository="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route 46.10. 
Using the Infinispan based aggregation repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository("aggregation"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .aggregate(header("MessageID")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy("myStrategy") .to("mock:result"); } }); Configure the cache Create the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate aggregationStrategy="myStrategy" completionSize="3" aggregationRepository="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route Note With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation. 46.11. Spring Boot Auto-Configuration The component supports 17 options that are listed below. Name Description Default Type camel.component.infinispan-embedded.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.infinispan-embedded.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan-embedded.cache-container Specifies the cache Container to connect. The option is a org.infinispan.manager.EmbeddedCacheManager type. EmbeddedCacheManager camel.component.infinispan-embedded.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.configuration.cache.Configuration type. Configuration camel.component.infinispan-embedded.clustered-listener If true, the listener will be installed for the entire cluster. false Boolean camel.component.infinispan-embedded.configuration Component configuration. 
46.11. Spring Boot Auto-Configuration The component supports 17 options that are listed below. Name Description Default Type camel.component.infinispan-embedded.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of the matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, and so on. true Boolean camel.component.infinispan-embedded.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan-embedded.cache-container Specifies the cache container to connect to. The option is an org.infinispan.manager.EmbeddedCacheManager type. EmbeddedCacheManager camel.component.infinispan-embedded.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is an org.infinispan.configuration.cache.Configuration type. Configuration camel.component.infinispan-embedded.clustered-listener If true, the listener will be installed for the entire cluster. false Boolean camel.component.infinispan-embedded.configuration Component configuration. The option is an org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration type. InfinispanEmbeddedConfiguration camel.component.infinispan-embedded.configuration-uri An implementation specific URI for the CacheManager. String camel.component.infinispan-embedded.custom-listener Returns the custom listener in use, if provided. The option is an org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener type. InfinispanEmbeddedCustomListener camel.component.infinispan-embedded.enabled Whether to enable auto configuration of the infinispan-embedded component. This is enabled by default. Boolean camel.component.infinispan-embedded.event-types Specifies the set of event types to register by the consumer. Multiple event types can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String camel.component.infinispan-embedded.flags A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String camel.component.infinispan-embedded.lazy-start-producer Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled while routing messages via Camel routing error handlers. Be aware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.infinispan-embedded.operation The operation to perform. InfinispanOperation camel.component.infinispan-embedded.query-builder Specifies the query builder. The option is an org.apache.camel.component.infinispan.InfinispanQueryBuilder type. InfinispanQueryBuilder camel.component.infinispan-embedded.remapping-function Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. BiFunction camel.component.infinispan-embedded.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body; any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in-message header named CamelInfinispanOperationResultHeader. String camel.component.infinispan-embedded.sync If true, the consumer will receive notifications synchronously. true Boolean
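As a brief illustration of how these options are typically set in a Spring Boot application, the following application.properties sketch combines a few of them; the property names come from the table above, but the values are assumptions for this example, not defaults.

# Illustrative values only; the configuration file and event types are assumptions for this sketch
camel.component.infinispan-embedded.configuration-uri=classpath:infinispan.xml
camel.component.infinispan-embedded.clustered-listener=true
camel.component.infinispan-embedded.event-types=CACHE_ENTRY_CREATED,CACHE_ENTRY_REMOVED
camel.component.infinispan-embedded.lazy-start-producer=false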
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "infinispan-embedded://cacheName?[options]", "infinispan-embedded:cacheName", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant(\"123\") (2) .to(\"infinispan:myCacheName&cacheContainer=#cacheContainer\"); (3)", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to(\"infinispan:myCacheName\");", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having(\"name\").like(\"%abc%\").build(); } }) .to(\"infinispan:myCacheName?cacheContainer=#cacheManager\") ;", "from(\"infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener\") .to(\"mock:result\");", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository(\"idempotent\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .idempotentConsumer(header(\"MessageID\"), repo) (3) .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository\" destroy-method=\"stop\"> <constructor-arg value=\"idempotent\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <idempotentConsumer idempotentRepository=\"infinispanRepo\"> (3) <header>MessageID</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository(\"aggregation\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .aggregate(header(\"MessageID\")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy(\"myStrategy\") .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository\" destroy-method=\"stop\"> <constructor-arg value=\"aggregation\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from 
uri=\"direct:start\" /> <aggregate aggregationStrategy=\"myStrategy\" completionSize=\"3\" aggregationRepository=\"infinispanRepo\"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-infinispan-embedded-component
Chapter 3. Service Mesh 1.x
Chapter 3. Service Mesh 1.x 3.1. Service Mesh Release Notes Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 3.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 3.1.2. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. Note Red Hat OpenShift Service Mesh 3 is generally available. For more information, see Red Hat OpenShift Service Mesh 3.0 . 3.1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh. For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 3.1.3.1. 
About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: $ oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: $ oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 3.1.3.2. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 3.1.3.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace> This creates a local directory that contains the following items: The Istio Operator namespace and its child objects All control plane namespaces and their child objects All namespaces and their child objects that belong to any service mesh All Istio custom resource definitions (CRD) All Istio CRD objects, such as VirtualServices, in a given namespace All Istio webhooks 3.1.4. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh.
The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.1.4.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.1.5. New Features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 3.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.21.1 3scale Istio Adapter 1.0.0 3.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.20.1 3scale Istio Adapter 1.0.0 3.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18 Component Version Istio 1.4.10 Jaeger 1.24.0 Kiali 1.12.18 3scale Istio Adapter 1.0.0 3.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. 
For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. 3.1.5.4.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 3.1.5.5. New features Red Hat OpenShift Service Mesh 1.1.17 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.6. New features Red Hat OpenShift Service Mesh 1.1.16 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.7. New features Red Hat OpenShift Service Mesh 1.1.15 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.8. New features Red Hat OpenShift Service Mesh 1.1.14 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 3.1.5.8.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . 
However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 3.1.5.8.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 3.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. When using denial policies, ensure that you understand how your application behaves. 3.1.5.8.3. 
Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 3.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 3.1.5.8.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v1 pathNormalization spec: global: pathNormalization: <option> 3.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also added support for configuring cipher suites. 3.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Note There are manual steps that must be completed to address CVE-2020-8663. 3.1.5.18.1. Manual updates required by CVE-2020-8663 The fix for CVE-2020-8663 : envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability. 
Important These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh. This new configuration option is called overload.global_downstream_max_connections , and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway. Procedure Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk: Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP): $ oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json Update the SMCP configuration to activate the override. Updated SMCP configuration example #1 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000 : $ oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000 Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration: Updated SMCP configuration example #2 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to "v1.0" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings 3.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6 When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance, then recreate the Jaeger instance because of an issue with certificates. Re-creating the Jaeger instance triggers creating a new set of certificates. If you are using persistent storage, the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance. Procedure if Jaeger is installed as part of Red Hat Service Mesh Determine the name of your Jaeger custom resource file: $ oc get jaeger -n istio-system You should see something like the following: NAME AGE jaeger 3d21h Copy the generated custom resource file into a temporary directory: $ oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml Delete the Jaeger instance: $ oc delete jaeger jaeger -n istio-system Recreate the Jaeger instance from your copy of the custom resource file: $ oc create -f /tmp/jaeger-cr.yaml -n istio-system Delete the copy of the generated custom resource file: $ rm /tmp/jaeger-cr.yaml Procedure if Jaeger is not installed as part of Red Hat Service Mesh Before you begin, create a copy of your Jaeger custom resource file.
Delete the Jaeger instance by deleting the custom resource file: $ oc delete -f <jaeger-cr-file> For example: $ oc delete -f jaeger-prod-elasticsearch.yaml Recreate your Jaeger instance from the backup copy of your custom resource file: $ oc create -f <jaeger-cr-file> Validate that your Pods have restarted: $ oc get pods -n jaeger-system -w 3.1.5.19. New features Red Hat OpenShift Service Mesh 1.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2 This release of Red Hat OpenShift Service Mesh addresses a security vulnerability. 3.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1 This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation. 3.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1. 3.1.5.22.1. Manual updates from 1.0 to 1.1 If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version. In the web console, click the Red Hat OpenShift Service Mesh Operator. Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system . Click the name of your control plane, for example basic-install . Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1 . The version field specifies the version of Service Mesh to install and defaults to the latest available version. Note Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October, 2020. You must upgrade to either v1.1 or v2.0. 3.1.6. Deprecated features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 3.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5 The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12. Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. v1alpha1 RBAC API - The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy . RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects. ServiceRole ServiceRoleBinding RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior. ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0) ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later) In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. The following components are also deprecated in this release and will be replaced by the Istiod component in a future release. Mixer - access control and usage policies Pilot - service discovery and proxy configuration Citadel - certificate generation Galley - configuration validation and distribution 3.1.7.
Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not support IPv6 , as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform. Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. 3.1.7.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: Jaeger/Kiali Operator upgrade blocked with operator pending When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending. Workaround: See the linked Knowledge Base article for more information. Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control pane, delete the evicted istio-operator pod. MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues. MAISTRA-465 The Maistra Operator fails to create a service for operator metrics. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 3.1.7.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. 
To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser. 3.1.8. Fixed issues The following issues been resolved in the current release: 3.1.8.1. Service Mesh fixed issues MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine. OSSM-542 Galley is not using the new certificate after rotation. OSSM-99 Workloads generated from direct pod without labels may crash Kiali. OSSM-93 IstioConfigList can't filter by two or more names. OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes. OSSM-90 Traces not available on the service details page. MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars. MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code. MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used. MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy . MAISTRA-932 Added the requires metadata to add dependency relationship between Jaeger Operator and OpenShift Elasticsearch Operator. Ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations. MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations. MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview. MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive. MAISTRA-572 Jaeger cannot be used with Kiali. In this release Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591 . MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails. MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround. 
MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 3.1.8.2. Kiali fixed issues KIALI-3239 If a Kiali Operator pod has failed with a status of "Evicted" it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator. KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting. KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors. KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts. KIALI-2686 When the control plane has many namespaces, it can lead to performance issues. 3.2. Understanding Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 3.2.1. What is Red Hat OpenShift Service Mesh? A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 3.2.2. 
Red Hat OpenShift Service Mesh Architecture Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane: The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub. Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services. Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers). Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel. Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform. Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster. 3.2.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 3.2.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 3.2.3.2. Kiali architecture Kiali is based on the open source Kiali project . 
Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform (Jaeger) as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 3.2.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js. Quarkus, Spring Boot, Thorntail and Vert.x. 
You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 3.2.4. Understanding Jaeger Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together-usually executed in different processes or hosts-to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 3.2.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 3.2.4.2. Distributed tracing architecture The distributed tracing platform (Jaeger) is based on the open source Jaeger project . The distributed tracing platform (Jaeger) is made up of several components that work together to collect, store, and display tracing data. Jaeger Client (Tracer, Reporter, instrumented application, client libraries)- Jaeger clients are language specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes. Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. 
This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch). Jaeger Console - Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 3.2.4.3. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 3.2.5. steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.3. Service Mesh and Istio differences Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways: 3.3.1. Multitenant installations Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 3.3.1.1. Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istod. The components no longer use cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . 
Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plugin: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed. 3.3.1.2. Cluster scoped resources Upstream Istio relies on two cluster-scoped resources: MeshPolicy and ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane. 3.3.2. Differences between Istio and Red Hat OpenShift Service Mesh An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. 3.3.2.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl. 3.3.2.2. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section. 3.3.2.3. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service.
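The header-matching examples below show the ServiceRoleBinding side, which assigns subjects to a role. For context, here is a minimal sketch of a ServiceRole that such a binding could reference; the httpbin names and the rule are illustrative only:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: httpbin-viewer                                 # hypothetical role name
  namespace: httpbin
spec:
  rules:
  - services: ["httpbin.httpbin.svc.cluster.local"]    # fully qualified service name
    methods: ["GET"]                                   # allow read-only access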
You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. Upstream Istio community matching request headers example apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.headers[<header>]: "value" Red Hat OpenShift Service Mesh matching request headers by using regular expressions apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.regex.headers[<header>]: "<regular expression>" 3.3.2.4. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 3.3.2.5. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 3.3.2.6. Envoy, Secret Discovery Service, and certificates Red Hat OpenShift Service Mesh does not support QUIC-based services. Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts. 3.3.2.7. Istio Container Network Interface (CNI) plugin Red Hat OpenShift Service Mesh includes CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges. 3.3.2.8. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 3.3.2.8.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. 
This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize it. 3.3.2.8.2. Subdomains Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 3.3.2.8.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 3.3.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 3.3.4. Distributed tracing and service mesh Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name for the Zipkin port name has changed to jaeger-collector-zipkin (from http ) Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar, for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 3.4. Preparing to install Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . 
For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Before you can install Red Hat OpenShift Service Mesh, review the installation activities, ensure that you meet the prerequisites: 3.4.1. Prerequisites Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.15 overview . Install OpenShift Container Platform 4.15. Install OpenShift Container Platform 4.15 on AWS Install OpenShift Container Platform 4.15 on user-provisioned AWS Install OpenShift Container Platform 4.15 on bare metal Install OpenShift Container Platform 4.15 on vSphere Note If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.15, see About the OpenShift CLI . 3.4.2. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.4.2.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.4.3. Service Mesh Operators overview Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Warning Do not install Community versions of the Operators. Community Operators are not supported. The following Operator is required: Red Hat OpenShift Service Mesh Operator Allows you to connect, secure, control, and observe the microservices that comprise your applications. It also defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. The following Operators are optional: Kiali Operator provided by Red Hat Provides observability for your service mesh. 
You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project. The following optional Operators are deprecated: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. OpenShift Elasticsearch Operator Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. Warning See Configuring the Elasticsearch log store for details on configuring the default Jaeger parameters for Elasticsearch in a production environment. 3.4.4. steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.5. Installing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. Note Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement. Note Multi-tenant control plane installations are the default configuration. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 3.5.1. Prerequisites Follow the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh defines and monitors the ServiceMeshControlPlane related to the deployment, update, and deletion of the control plane. Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane. 3.5.2. 
Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing. 3.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub . By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field. Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat , to display information about the Operator. Click Install . For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. Note If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available. If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing. 3.5.4. Installing the Kiali Operator You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane. Warning Do not install Community versions of the Operators. Community Operators are not supported. Prerequisites Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Kiali into the filter box to find the Kiali Operator. Click the Kiali Operator provided by Red Hat to display information about the Operator. Click Install . On the Operator Installation page, select the stable Update Channel. Select All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Select the Automatic Approval Strategy. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . The Installed Operators page displays the Kiali Operator's installation progress. 3.5.5. Installing the Operators To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install. 
Additional Operators include: Kiali Operator provided by Red Hat Tempo Operator Deprecated additional Operators include: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Operator Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator installs before repeating the steps for the next Operator you want to install. The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform (Jaeger) installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Verification After you have installed all four Operators, click Operators Installed Operators to verify that your Operators are installed. 3.5.6. Deploying the Red Hat OpenShift Service Mesh control plane The ServiceMeshControlPlane resource defines the configuration to be used during installation.
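For orientation, the following is a minimal sketch of a version 1.1 ServiceMeshControlPlane ; the resource name and the component settings shown are illustrative and may differ from the default template that the Operator provides:
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install            # hypothetical name
  namespace: istio-system        # the control plane project
spec:
  istio:
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false
    mixer:
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
    pilot:
      autoscaleEnabled: false
      traceSampling: 100
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one     # in-memory tracing storage, suitable for evaluation only
For production, you would switch the tracing template to a persistent storage option, as noted in the procedures that follow.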
You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs. You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool. 3.5.6.1. Deploying the control plane from the web console Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . Enter istio-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs , the Operator provides links to create two resource types: A ServiceMeshControlPlane resource A ServiceMeshMemberRoll resource Under Istio Service Mesh Control Plane click Create ServiceMeshControlPlane . On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed. Note For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. Click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 3.5.6.2. Deploying the control plane from the CLI Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Create a project named istio-system . USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments you must change the default Jaeger template. Run the following command to deploy the control plane: USD oc create -n istio-system -f istio-installation.yaml Execute the following command to see the status of the control plane installation. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady .
Run the following command to watch the progress of the Pods during the installation process: You should see output similar to the following: Example output NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates . 3.5.7. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 3.5.7.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 3.5.7.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. USD oc new-project <your-project> To add your projects as members, modify the following example YAML. 
You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system default The installation has finished successfully when the STATUS column is Configured . 3.5.8. Adding or removing projects from the service mesh You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. 3.5.8.1. Adding or removing projects from the member roll using the web console Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload . 3.5.8.2. Adding or removing projects from the member roll using the CLI You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. USD oc edit smmr -n <controlplane-namespace> Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name 3.5.9. Manual updates If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators. 
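If you chose the Manual approval strategy, that choice is recorded in the Operator's Subscription resource. A minimal sketch is shown below; the package name servicemeshoperator , the stable channel, and the redhat-operators catalog source are assumptions that you should verify against your cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator        # assumed subscription name
  namespace: openshift-operators
spec:
  channel: stable                  # assumed channel name
  installPlanApproval: Manual      # OLM waits for an administrator to approve each update
  name: servicemeshoperator        # assumed package name in the catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace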
For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation. 3.5.9.1. Updating sidecar proxies To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods: USD oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}' If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 3.5.10. Next steps Prepare to deploy applications on Red Hat OpenShift Service Mesh. 3.6. Customizing security in a Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices. 3.6.1. Enabling mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). mTLS can be used without changes to the application or service code. TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies. By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. 3.6.1.1. Enabling strict mTLS across the mesh If your workloads do not communicate with services outside your mesh and communication will not be interrupted by only accepting encrypted connections, you can enable mTLS across your mesh quickly. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The operator creates the required resources. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true 3.6.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services or namespaces by creating a policy. apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {} 3.6.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh.
apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL 3.6.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3 The default is TLS_AUTO and does not specify a version of TLS. Table 3.3. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 3.6.2. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 3.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates self-signed root certificate and key, and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates, with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. 
You must deploy the Bookinfo sample application to verify the results with these instructions. 3.6.3.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is called root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates. Create a secret cacert that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem and cert-chain.pem . USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource set global.mtls.enabled to true and security.selfSigned set to false . Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.* . In this example, istio.default . Service Mesh issues new certificates for the workloads. USD oc delete secret istio.default 3.6.3.2. Verifying your certificates Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod. Store the pod name in the variable RATINGSPOD . USD RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'` Run the following commands to retrieve the certificates mounted on the proxy. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt USD openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt USD diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt Expect the output to be empty. Verify the CA certificate is the same as the one specified by Operator. Replace <path> with the path to your certificates. USD sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt USD openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt USD diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt Expect the output to be empty. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates. 
USD head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem Example output /tmp/pod-cert-chain-workload.pem: OK 3.6.3.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true 3.7. Traffic management Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 3.7.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. 
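Before the ingress example that follows, here is a brief sketch of the egress case mentioned above, binding a Gateway to the default egress gateway workload; the resource name, host, and TLS mode are illustrative:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-egress-gwy               # hypothetical name
spec:
  selector:
    istio: egressgateway             # use the default egress gateway workload
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - ext-svc.example.com            # external host that traffic is allowed to leave for
    tls:
      mode: PASSTHROUGH              # pass the TLS connection through unterminated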
The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 3.7.2. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 3.7.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 3.7.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. 
To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. 3.7.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it's a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 3.7.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') 3.7.4. Automatic route creation OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. 3.7.4.1. Enabling Automatic Route Creation A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment. If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS. In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true . 
For example, see the following resource snippet: spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true 3.7.4.2. Subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host Gateway. For more information, see the "Links" section. If the following gateway is created: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command. USD oc -n <control_plane_namespace> get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh. 3.7.5. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 3.7.6. 
Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 3.7.6.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 3.7.6.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 3.7.7. 
Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 3.7.8. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /product page in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites Deploy the Bookinfo sample application to work with the following examples. 3.7.8.1. Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each micro-service by applying virtual services that set the default version for the micro-services. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 3.7.8.2. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. 
export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 3.7.8.3. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 3.7.9. Additional resources For more information about configuring an OpenShift Container Platform wildcard policy, see Using wildcard routes . 3.8. Deploying applications on Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation. 3.8.1. Prerequisites Review Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations Review Installing Red Hat OpenShift Service Mesh 3.8.2. Creating control plane templates You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team specific customization. 
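For illustration only, a minimal development template might look like the following sketch. The file name, field values, and the template name development are assumptions rather than anything shipped with the product; the snippet reuses the same spec.istio fields shown in the ServiceMeshControlPlane examples elsewhere in this document.

# Hypothetical development.yaml template for the smcp-templates ConfigMap
# (illustrative values only; adjust for your environment)
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      proxy:
        resources:
          requests:
            cpu: 10m
            memory: 128Mi
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
    pilot:
      autoscaleEnabled: false
      traceSampling: 100
    tracing:
      enabled: true
      jaeger:
        template: all-in-one

A team-specific ServiceMeshControlPlane could then extend this sketch by setting spec.template to development, assuming the file is added to the smcp-templates ConfigMap described next, and overriding only the fields that differ for that team.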
When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default template with default settings for Red Hat OpenShift Service Mesh. To add custom templates, you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates .

3.8.2.1. Creating the ConfigMap

Follow this procedure to create the ConfigMap.

Prerequisites

An installed, verified Service Mesh Operator.
An account with the cluster-admin role.
Location of the Operator deployment.
Access to the OpenShift CLI ( oc ).

Procedure

Log in to the OpenShift Container Platform CLI as a cluster administrator.

From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project, replacing <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk:

$ oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators

Locate the Operator ClusterServiceVersion name.

$ oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'

Example output

maistra.v1.0.0   Red Hat OpenShift Service Mesh   1.0.0   Succeeded

Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap.

$ oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0

Add a volume mount and volume to the Operator deployment.

deployments:
  - name: istio-operator
    spec:
      template:
        spec:
          containers:
            volumeMounts:
              - name: discovery-cache
                mountPath: /home/istio-operator/.kube/cache/discovery
              - name: smcp-templates
                mountPath: /usr/local/share/istio-operator/templates/
          volumes:
            - name: discovery-cache
              emptyDir:
                medium: Memory
            - name: smcp-templates
              configMap:
                name: smcp-templates
...

Save your changes and exit the editor.

You can now use the template parameter in the ServiceMeshControlPlane to specify a template.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: minimal-install
spec:
  template: default

3.8.3. Enabling automatic sidecar injection

When deploying an application, you must opt in to injection by configuring the label sidecar.istio.io/inject in spec.template.metadata.labels to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem.

Prerequisites

Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection.

Procedure

To find your deployments, use the oc get command.

$ oc get deployment -n <namespace>

For example, to view the Deployment YAML file for the ratings-v1 microservice in the bookinfo namespace, use the following command to see the resource in YAML format.

$ oc get deployment -n bookinfo ratings-v1 -o yaml

Open the application's Deployment YAML file in an editor.

Add spec.template.metadata.labels.sidecar.istio.io/inject to your Deployment YAML file and set sidecar.istio.io/inject to true as shown in the following example.
Example snippet from bookinfo deployment-ratings-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: 'true'

Note

Using the annotations parameter when enabling automatic sidecar injection is deprecated and is replaced by using the labels parameter.

Save the Deployment YAML file.

Add the file back to the project that contains your app.

$ oc apply -n <namespace> -f deployment.yaml

In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited.

$ oc apply -n bookinfo -f deployment-ratings-v1.yaml

To verify that the resource uploaded successfully, run the following command.

$ oc get deployment -n <namespace> <deploymentName> -o yaml

For example,

$ oc get deployment -n bookinfo ratings-v1 -o yaml

3.8.4. Setting proxy environment variables through annotations

Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected into the sidecar.

Example injection-template.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource
spec:
  replicas: 7
  selector:
    matchLabels:
      app: resource
  template:
    metadata:
      annotations:
        sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }"

Warning

You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations are overwritten or deleted by the Operator during reconciliation.

3.8.5. Updating Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.

Prerequisites

Access to the OpenShift CLI ( oc ).

Note

The examples use istio-system as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP).

Procedure

Log in to the OpenShift Container Platform CLI.

Run this command to check the current Mixer policy enforcement status:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks

If disablePolicyChecks: true , edit the Service Mesh ConfigMap:

$ oc edit cm -n istio-system istio

Locate disablePolicyChecks: true within the ConfigMap and change the value to false .

Save the configuration and exit the editor.

Re-check the Mixer policy enforcement status to ensure it is set to false .

3.8.5.1. Setting the correct network policy

Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure that services in your service mesh that were previously exposed through an OpenShift Container Platform route continue to function properly.

Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly.
Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 3.8.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.6.6 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 3.8.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Note The Bookinfo sample application cannot be installed on IBM Z(R) and IBM Power(R). Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. 
USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 3.8.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 3.8.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure from CLI Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . 
You should see output similar to the following:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-55b869668-jh7hb        2/2     Running   0          12m
productpage-v1-6fc77ff794-nsl8r   2/2     Running   0          12m
ratings-v1-7d7d8d8b56-55scn       2/2     Running   0          12m
reviews-v1-868597db96-bdxgq       2/2     Running   0          12m
reviews-v2-5b64f47978-cvssp       2/2     Running   0          12m
reviews-v3-6dfd49b55b-vcwpf       2/2     Running   0          12m

Run the following command to retrieve the URL for the product page:

echo "http://$GATEWAY_URL/productpage"

Copy and paste the output in a web browser to verify the Bookinfo product page is deployed.

Procedure from Kiali web console

Obtain the address for the Kiali web console.

Log in to the OpenShift Container Platform web console.

Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route.

Click the link in the Location column for Kiali.

Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace.

In Kiali, click Graph .

Select bookinfo from the Namespace list, and App graph from the Graph Type list.

Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported.

Use the Duration menu to increase the time period to help ensure older traffic is captured.

Use the Refresh Rate menu to refresh traffic more or less often, or not at all.

Click Services , Workloads or Istio Config to see list views of bookinfo components, and confirm that they are healthy.

3.8.6.4. Removing the Bookinfo application

Follow these steps to remove the Bookinfo application.

Prerequisites

OpenShift Container Platform 4.1 or higher installed.
Red Hat OpenShift Service Mesh 2.6.6 installed.
Access to the OpenShift CLI ( oc ).

3.8.6.4.1. Delete the Bookinfo project

Procedure

Log in to the OpenShift Container Platform web console.

Click Home Projects .

Click the bookinfo menu, and then click Delete Project .

Type bookinfo in the confirmation dialog box, and then click Delete .

Alternatively, you can run this command using the CLI to delete the bookinfo project.

$ oc delete project bookinfo

3.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll

Procedure

Log in to the OpenShift Container Platform web console.

Click Operators Installed Operators .

Click the Project menu and choose istio-system from the list.

Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator.

Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll .

Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list.

Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project.

$ oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]'

Click Save to update Service Mesh Member Roll.

3.8.7. Generating example traces and analyzing trace data

Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh.
This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Jaeger enabled during the installation. Bookinfo example application installed. Procedure After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. In the OpenShift Container Platform console, navigate to Networking Routes and search for the Jaeger route, which is the URL listed under Location . Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace: USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL. echo USDJAEGER_URL Log in using the same user name and password as you use to access the OpenShift Container Platform console. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage . 3.9. Data visualization and observability Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can view your application's topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time. Before you begin You can observe the data flow through your application if you have an application installed. If you don't have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 3.9.1. Viewing service mesh data The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh. Procedure Use the perspective switcher to switch to the Administrator perspective. Click Home Projects . Click the name of your project. For example, click bookinfo . In the Launcher section, click Kiali . Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. 
When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation, there might not be any data to display. 3.9.2. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this the graph. Graph nodes are decorated with a variety of information, pointing out various route routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 3.9.2.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In Kiali console, click Graph to view a namespace graph. From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 3.10. Custom resources Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. 
For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource. 3.10.1. Prerequisites An account with the cluster-admin role. Completed the Preparing to install Red Hat OpenShift Service Mesh process. Have installed the operators. 3.10.2. Red Hat OpenShift Service Mesh custom resources Note The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary. A custom resource allows you to extend the API in an Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh it creates a default ServiceMeshControlPlane that you can modify to change the project parameters. The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object. This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL). Important The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account ( SaaS or On-Premises ). Example istio-installation.yaml apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one 3.10.3. ServiceMeshControlPlane parameters The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters. Important The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration. 3.10.3.1. Istio global example Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Note In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false . Example global parameters istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret Table 3.4. Global parameters Parameter Description Values Default value disablePolicyChecks This parameter enables/disables policy checks. 
true / false true policyCheckFailOpen This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. true / false false tag The tag that the Operator uses to pull the Istio images. A valid container image tag. 1.1.0 hub The hub that the Operator uses to pull Istio images. A valid image repository. maistra/ or registry.redhat.io/openshift-service-mesh/ mtls This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default. true / false false imagePullSecrets If access to the registry providing the Istio images is secure, list an imagePullSecret here. redhat-registry-pullsecret OR quay-pullsecret None These parameters are specific to the proxy subset of global parameters. Table 3.5. Proxy parameters Type Parameter Description Values Default value requests cpu The amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 10m memory The amount of memory requested for Envoy proxy Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 2000m memory The maximum amount of memory Envoy proxy is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 1024Mi 3.10.3.2. Istio gateway configuration Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example gateway parameters gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 Table 3.6. Istio Gateway parameters Parameter Description Values Default value gateways.egress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.egress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.egress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 gateways.ingress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.ingress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.ingress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Cluster administrators can refer to Using wildcard routes for instructions on how to enable subdomains. 3.10.3.3. 
Istio Mixer configuration Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example mixer parameters mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits: Table 3.7. Istio Mixer policy parameters Parameter Description Values Default value enabled This parameter enables/disables Mixer. true / false true autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true autoscaleMin The minimum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 autoscaleMax The maximum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Table 3.8. Istio Mixer telemetry parameters Type Parameter Description Values Default requests cpu The percentage of CPU resources requested for Mixer telemetry. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Mixer telemetry. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum percentage of CPU resources Mixer telemetry is permitted to use. CPU resources in millicores based on your environment's configuration. 4800m memory The maximum amount of memory Mixer telemetry is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 4G 3.10.3.4. Istio Pilot configuration You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example pilot parameters spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M Table 3.9. Istio Pilot parameters Parameter Description Values Default value cpu The percentage of CPU resources requested for Pilot. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Pilot. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true traceSampling This value controls how often random sampling occurs. Note: Increase for development or testing. A valid percentage. 1.0 3.10.4. Configuring Kiali When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. The default Kiali parameters specified in the ServiceMeshControlPlane are as follows: Example Kiali parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true Table 3.10. 
Kiali parameters Parameter Description Values Default value This parameter enables/disables Kiali. Kiali is enabled by default. true / false true This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh. true / false false This parameter enables/disables ingress for Kiali. true / false true 3.10.4.1. Configuring Kiali for Grafana When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Grafana is enabled as an external service for Kiali Grafana authorization for the Kiali console Grafana URL for the Kiali console Kiali can automatically detect the Grafana URL. However if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Grafana parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: "https://grafana-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.4.2. Configuring Kiali for Jaeger When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Jaeger is enabled as an external service for Kiali Jaeger authorization for the Kiali console Jaeger URL for the Kiali console Kiali can automatically detect the Jaeger URL. However if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Jaeger parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.5. Configuring Jaeger When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. You can specify your Jaeger configuration in either of two ways: Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach. Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows: Default all-in-one Jaeger parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one Table 3.11. Jaeger parameters Parameter Description Values Default value This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false . true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one - For development, testing, demonstrations, and proof of concept. production-elasticsearch - For production use. all-in-one Note The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy which uses in-memory storage. 
For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment. 3.10.5.1. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.12. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . 
Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.2. Connecting to an existing Jaeger instance In order for the SMCP to connect to an existing Jaeger instance, the following must be true: The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace. To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it. To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance. Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local . Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local . Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command: USD oc get route -n istio-system external-jaeger Example output NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...] The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard. Example Jaeger resource apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "external-jaeger" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource. 
Example ServiceMeshControlPlane with external Jaeger apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local 3.10.5.3. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.13. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 
Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.4. Configuring the Elasticsearch index cleaner job When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator then uses this CR when creating Jaeger instances. When using Elasticsearch storage, by default a job is created to clean old traces from it. To configure the options for this job, you edit the Jaeger custom resource (CR), to customize it for your use case. The relevant options are listed below. apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: "55 23 * * *" Table 3.14. Elasticsearch index cleaner parameters Parameter Values Description enabled: true/ false Enable or disable the index cleaner job. numberOfDays: integer value Number of days to wait before deleting an index. schedule: "55 23 * * *" Cron expression for the job to run For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the Elasticsearch log store . 3.10.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. Example 3scale parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true # ... Table 3.15. 3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. 
debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scrapped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allow to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 3.11. Using the 3scale Istio adapter Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. 3.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites Red Hat OpenShift Service Mesh version 1.x A working 3scale account ( SaaS or 3scale 2.5 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. 
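Before you create the handler in the following procedure, it can help to confirm that the adapter workload and its gRPC service exist in the control plane project. The following is a minimal pre-check sketch, assuming the adapter was deployed with its usual names: the threescale-istio-adapter service listening on port 3333 (the address referenced by the handler's connection block) and pods labeled app=3scale-istio-adapter. Treat both names as assumptions and adjust them if your deployment differs.
# Check that the adapter pods are running (label assumed from the default adapter deployment)
$ oc get pods -n istio-system -l app=3scale-istio-adapter
# Check that the gRPC service referenced by the handler (threescale-istio-adapter:3333) exists
$ oc get svc threescale-istio-adapter -n istio-system
If either command returns nothing, the adapter is not deployed in the control plane project and the handler will have no endpoint to connect to.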
You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. Do step 2 to link it to your 3scale account credentials and to its service identifier, whenever you intend to add more services. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 3.11.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 3.16. Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 3.11.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate templates files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 3.11.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify with the service you are managing with 3scale. 
The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 3.11.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 3.11.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 3.11.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 3.11.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 3.11.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. 
You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 3.11.4.1.1. API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 3.11.4.1.2. Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 3.11.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. 
OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 3.11.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 3.11.5. 3scale Adapter metrics The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. 3.11.6. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n istio-system Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs istio-system When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 3.11.7. 
3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 3.12. Removing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the operators. 3.12.1. Removing the Red Hat OpenShift Service Mesh control plane To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources. 3.12.1.1. Removing the Service Mesh control plane using the web console You can remove the Red Hat OpenShift Service Mesh control plane by using the web console. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Navigate to Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the ServiceMeshControlPlane menu . Click Delete Service Mesh Control Plane . Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane . 3.12.1.2. Removing the Service Mesh control plane using the CLI You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project. Procedure Log in to the OpenShift Container Platform CLI. Run the following command to delete the ServiceMeshMemberRoll resource. 
USD oc delete smmr -n istio-system default Run this command to retrieve the name of the installed ServiceMeshControlPlane : USD oc get smcp -n istio-system Replace <name_of_custom_resource> with the output from the command, and run this command to remove the custom resource: USD oc delete smcp -n istio-system <name_of_custom_resource> 3.12.2. Removing the installed Operators You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, and the OpenShift Elasticsearch Operator. 3.12.2.1. Removing the Operators Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators. Red Hat OpenShift Service Mesh Kiali Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Procedure Log in to the OpenShift Container Platform web console. From the Operators Installed Operators page, scroll or type a keyword into the Filter by name to find each Operator. Then, click the Operator name. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator. 3.12.2.2. Clean up Operator resources Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console. Prerequisites An account with cluster administration access. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a stand alone service without service mesh, do not delete the Jaeger resources. Note The Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed. USD oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete -n openshift-operators daemonset/istio-node USD oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni USD oc delete clusterrole istio-view istio-edit USD oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view USD oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete USD oc delete crds jaegers.jaegertracing.io USD oc delete svc admission-controller -n <operator-project> USD oc delete project <istio-system-project>
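After the cleanup commands complete, you can optionally verify that nothing was left behind. The following is a minimal verification sketch that reuses the same name patterns as the cleanup commands above; empty output means the corresponding resources are gone:
# Look for leftover Service Mesh, Kiali, or Istio-related CRDs
$ oc get crds -o name | grep -E '\.istio\.io|\.maistra\.io|\.kiali\.io'
# Look for leftover admission webhook configurations created by the Operator
$ oc get validatingwebhookconfigurations,mutatingwebhookconfigurations -o name | grep servicemesh-resources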
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. 
version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: 
host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o 
jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: 
/home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n istio-system istio", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", 
\"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. 
Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" 
--template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project 
<istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/service_mesh/service-mesh-1-x
Chapter 27. Desktop
Chapter 27. Desktop Stylus of Dell Canvas 27 fixed Previously, Dell Canvas 27 contained a Wacom tablet in which the ranges were offset by default. As a consequence, the stylus mapped to the upper left quarter of the screen. Red Hat Enterprise Linux 7.5 supports the stylus of the Dell Canvas 27, making sure coordinates are accurately reported. As a result, the cursor is placed directly under the tip of the stylus as required. (BZ#1507821) llvmpipe crashes on IBM Power Systems On the little-endian variant of IBM Power Systems architecture, a race condition in GNOME Shell code previously caused the LLVM engine for Mesa, llvm-private , to terminate unexpectedly. This update disables threading in the JavaScript engine, which prevents the segmentation fault from occurring. As a result, llvm-private no longer crashes on IBM Power Systems. (BZ#1523121)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_desktop
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage > Data Foundation > Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health, or an indication of alerts. Choose a node to view its details in the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of problems and offers the granularity that aids in troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
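The Topology view itself is web-console only, but the zone and node grouping it draws can be cross-checked from the command line. The following is a minimal sketch, not part of the official procedure, and it assumes your nodes carry the standard Kubernetes topology zone label:

# List nodes together with their zone label, mirroring the zones (dotted lines)
# and nodes (circular entities) shown in the Topology view.
oc get nodes -L topology.kubernetes.io/zone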
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_any_platform/viewing-odf-topology_mcg-verify
Installing on-premise with Assisted Installer
Installing on-premise with Assisted Installer OpenShift Container Platform 4.16 Installing OpenShift Container Platform on-premise with the Assisted Installer Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on-premise_with_assisted_installer/index
Chapter 13. Managing ISO Images
Chapter 13. Managing ISO Images You can use Satellite to store ISO images, either from Red Hat's Content Delivery Network or other sources. You can also upload other files, such as virtual machine images, and publish them in repositories. 13.1. Importing ISO Images from Red Hat The Red Hat Content Delivery Network provides ISO images for certain products. The procedure for importing this content is similar to the procedure for enabling repositories for RPM content. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . In the Search field, enter an image name, for example, Red Hat Enterprise Linux 7 Server (ISOs) . In the Available Repositories window, expand Red Hat Enterprise Linux 7 Server (ISOs) . For the x86_64 7.2 entry, click the Enable icon to enable the repositories for the image. In the Satellite web UI, navigate to Content > Products and click Red Hat Enterprise Linux Server . Click the Repositories tab of the Red Hat Enterprise Linux Server window, and click Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2 . In the upper right of the Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2 window, click Select Action and select Sync Now . To view the Synchronization Status In the Satellite web UI, navigate to Content > Sync Status and expand Red Hat Enterprise Linux Server . CLI procedure Locate the Red Hat Enterprise Linux Server product for file repositories: Enable the file repository for Red Hat Enterprise Linux 7.2 Server ISO: Locate the repository in the product: Synchronize the repository in the product: 13.2. Importing Individual ISO Images and Files Use this procedure to manually import ISO content and other files to Satellite Server. To import files, you can complete the following steps in the Satellite web UI or using the Hammer CLI. However, if the size of the file that you want to upload is larger than 15 MB, you must use the Hammer CLI to upload it to a repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products , and in the Products window, click Create Product . In the Name field, enter a name to identify the product. This name populates the Label field. Optional: In the GPG Key field, enter a GPG Key for the product. Optional: From the Sync Plan list, select a synchronization plan for the product. Optional: In the Description field, enter a description of the product. Click Save . In the Products window, click the new product and then click Create Repository . In the Name field, enter a name for the repository. This automatically populates the Label field. From the Type list, select file . In the Upstream URL field, enter the URL of the registry to use as a source. Add a corresponding user name and password in the Upstream Username and Upstream Password fields. Click Save . Select the new repository. Navigate to Upload File and click Browse . Select the .iso file and click Upload . CLI procedure Create the custom product: Create the repository: Upload the ISO file to the repository:
[ "hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \" | grep \"file\"", "hammer repository-set enable --product \"Red Hat Enterprise Linux Server\" --name \"Red Hat Enterprise Linux 7 Server (ISOs)\" --releasever 7.2 --basearch x86_64 --organization \" My_Organization \"", "hammer repository list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer repository synchronize --name \"Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer product create --name \" My_ISOs \" --sync-plan \"Example Plan\" --description \" My_Product \" --organization \" My_Organization \"", "hammer repository create --name \" My_ISOs \" --content-type \"file\" --product \" My_Product \" --organization \" My_Organization \"", "hammer repository upload-content --path ~/bootdisk.iso --id repo_ID --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Managing_ISO_Images_content-management
10.3. Establishing Connections
10.3. Establishing Connections 10.3.1. Establishing a Wired (Ethernet) Connection To establish a wired network connection, right-click on the NetworkManager applet to open its context menu, ensure that the Enable Networking box is checked, then click on Edit Connections . This opens the Network Connections window. Note that this window can also be opened by running, as a normal user: You can click on the arrow head to reveal and hide the list of connections as needed. Figure 10.6. The Network Connections window showing the newly created System eth0 connection The system startup scripts create and configure a single wired connection called System eth0 by default on all systems. Although you can edit System eth0 , creating a new wired connection for your custom settings is recommended. You can create a new wired connection by clicking the Add button, selecting the Wired entry from the list that appears and then clicking the Create button. Figure 10.7. Selecting a new connection type from the "Choose a Connection Type" list Note When you add a new connection by clicking the Add button, a list of connection types appears. Once you have made a selection and clicked on the Create button, NetworkManager creates a new configuration file for that connection and then opens the same dialog that is used for editing an existing connection. There is no difference between these dialogs. In effect, you are always editing a connection; the difference only lies in whether that connection previously existed or was just created by NetworkManager when you clicked Create . Figure 10.8. Editing the newly created Wired connection System eth0 Configuring the Connection Name, Auto-Connect Behavior, and Availability Settings Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the Wired section of the Network Connections window. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Configuring the Wired Tab The final three configurable settings are located within the Wired tab itself: the first is a text-entry field where you can specify a MAC (Media Access Control) address, the second allows you to specify a cloned MAC address, and the third allows you to specify the MTU (Maximum Transmission Unit) value. Normally, you can leave the MAC address field blank and the MTU set to automatic . These defaults will suffice unless you are associating a wired connection with a second or specific NIC, or performing advanced networking. In such cases, see the following descriptions: MAC Address Network hardware such as a Network Interface Card (NIC) has a unique MAC address (Media Access Control; also known as a hardware address ) that identifies it to the system. Running the ip addr command will show the MAC address associated with each interface. For example, in the following ip addr output, the MAC address for the eth0 interface (which is 52:54:00:26:9e:f1 ) immediately follows the link/ether keyword: A single system can have one or more NICs installed on it.
The MAC address field therefore allows you to associate a specific NIC with a specific connection (or connections). As mentioned, you can determine the MAC address using the ip addr command, and then copy and paste that value into the MAC address text-entry field. The cloned MAC address field is mostly for use in situations where a network service has been restricted to a specific MAC address and you need to emulate that MAC address. MTU The MTU (Maximum Transmission Unit) value represents the size in bytes of the largest packet that the connection will use to transmit. This value defaults to 1500 when using IPv4, or to a variable number of 1280 or higher for IPv6, and does not generally need to be specified or changed. Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your wired connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: port-based Network Access Control (PNAC), click the 802.1X Security tab and proceed to Section 10.3.9.1, "Configuring 802.1X Security" ; IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" .
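If you prefer not to copy the MAC address out of the full ip addr output by hand, the following is a small sketch (not taken from this guide) that prints only the hardware address of one interface; the interface name eth0 is an assumption and should be replaced with your own:

# Print just the MAC address of eth0 by matching the link/ether line.
ip addr show eth0 | awk '/link\/ether/ {print $2}'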
[ "~]USD nm-connection-editor &", "~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 52:54:00:26:9e:f1 brd ff:ff:ff:ff:ff:ff inet 192.168.122.251/24 brd 192.168.122.255 scope global eth0 inet6 fe80::5054:ff:fe26:9ef1/64 scope link valid_lft forever preferred_lft forever" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-establishing_connections
Appendix C. Cluster Connection Configuration Elements
Appendix C. Cluster Connection Configuration Elements The table below lists all of the configuration elements of a cluster-connection . Table C.1. Cluster connection configuration elements Name Description address Each cluster connection applies only to addresses that match the value specified in the address field. If no address is specified, then all addresses will be load balanced. The address field also supports comma separated lists of addresses. Use exclude syntax, ! to prevent an address from being matched. Below are some example addresses: jms.eu Matches all addresses starting with jms.eu . !jms.eu Matches all addresses except for those starting with jms.eu jms.eu.uk,jms.eu.de Matches all addresses starting with either jms.eu.uk or jms.eu.de jms.eu,!jms.eu.uk Matches all addresses starting with jms.eu , but not those starting with jms.eu.uk Note You should not have multiple cluster connections with overlapping addresses (for example, "europe" and "europe.news"), because the same messages could be distributed between more than one cluster connection, possibly resulting in duplicate deliveries. call-failover-timeout Use when a call is made during a failover attempt. The default is -1 , or no timeout. call-timeout When a packet is sent over a cluster connection, and it is a blocking call, call-timeout determines how long the broker will wait (in milliseconds) for the reply before throwing an exception. The default is 30000 . check-period The interval, in milliseconds, between checks to see if the cluster connection has failed to receive pings from another broker. The default is 30000 . confirmation-window-size The size, in bytes, of the window used for sending confirmations from the broker connected to. When the broker receives confirmation-window-size bytes, it notifies its client. The default is 1048576 . A value of -1 means no window. connector-ref Identifies the connector that will be transmitted to other brokers in the cluster so that they have the correct cluster topology. This parameter is mandatory. connection-ttl Determines how long a cluster connection should stay alive if it stops receiving messages from a specific broker in the cluster. The default is 60000 . discovery-group-ref Points to a discovery-group to be used to communicate with other brokers in the cluster. This element must include the attribute discovery-group-name , which must match the name attribute of a previously configured discovery-group . initial-connect-attempts Sets the number of times the system will try to connect a broker in the cluster initially. If the max-retry is achieved, this broker will be considered permanently down, and the system will not route messages to this broker. The default is -1 , which means infinite retries. max-hops Configures the broker to load balance messages to brokers which might be connected to it only indirectly with other brokers as intermediates in a chain. This allows for more complex topologies while still providing message load-balancing. The default value is 1 , which means messages are distributed only to other brokers directly connected to this broker. This parameter is optional. max-retry-interval The maximum delay for retries, in milliseconds. The default is 2000 . message-load-balancing Determines whether and how messages will be distributed between other brokers in the cluster. Include the message-load-balancing element to enable load balancing. The default value is ON_DEMAND . You can provide a value as well. Valid values are: OFF Disables load balancing. 
STRICT Enables load balancing and forwards messages to all brokers that have a matching queue, whether or not the queue has an active consumer or a matching selector. ON_DEMAND Enables load balancing and ensures that messages are forwarded only to brokers that have active consumers with a matching selector. OFF_WITH_REDISTRIBUTION Disables load balancing but ensures that messages are forwarded only to brokers that have active consumers with a matching selector when no suitable local consumer is available. min-large-message-size If a message size, in bytes, is larger than min-large-message-size , it will be split into multiple segments when sent over the network to other cluster members. The default is 102400 . notification-attempts Sets how many times the cluster connection should broadcast itself when connecting to the cluster. The default is 2 . notification-interval Sets how often, in milliseconds, the cluster connection should broadcast itself when attaching to the cluster. The default is 1000 . producer-window-size The size, in bytes, used for producer flow control over the cluster connection. By default, it is disabled, but you may want to set a value if you are using very large messages in a cluster. A value of -1 means no window. reconnect-attempts Sets the number of times the system will try to reconnect to a broker in the cluster. If the max-retry is achieved, this broker will be considered permanently down and the system will stop routing messages to this broker. The default is -1 , which means infinite retries. retry-interval Determines the interval, in milliseconds, between retry attempts. If the cluster connection is created and the target broker has not been started or is booting, then the cluster connections from other brokers will retry connecting to the target until it comes back up. This parameter is optional. The default value is 500 milliseconds. retry-interval-multiplier The multiplier used to increase the retry-interval after each reconnect attempt. The default is 1. use-duplicate-detection Cluster connections use bridges to link the brokers, and bridges can be configured to add a duplicate ID property in each message that is forwarded. If the target broker of the bridge crashes and then recovers, messages might be resent from the source broker. By setting use-duplicate-detection to true , any duplicate messages will be filtered out and ignored on receipt at the target broker. The default is true .
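To show how these elements fit together, the following is a minimal sketch of a cluster-connection definition as it might appear inside the <core> element of broker.xml. It is printed as a shell here-document purely for display; the names my-cluster, netty-connector, and my-discovery-group are placeholders rather than values defined in this appendix, and the element values simply restate the defaults described above:

# Illustrative only: print a minimal cluster-connection fragment; in practice this
# XML is added under <core> in the broker's broker.xml configuration file.
cat <<'EOF'
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms.eu</address>
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
EOF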
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/cluster_connection_elements
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance. Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. 
You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container with your instrumented application. Name Description Default value OTEL_SERVICE_NAME Sets the value of the service.name resource attribute. "" OTEL_EXPORTER_OTLP_ENDPOINT Base endpoint URL for any signal type with an optionally specified port number. https://localhost:4317 OTEL_EXPORTER_OTLP_CERTIFICATE Path to the certificate file for the TLS credentials of the gRPC client. https://localhost:4317 OTEL_TRACES_SAMPLER Sampler to be used for traces. parentbased_always_on OTEL_EXPORTER_OTLP_PROTOCOL Transport protocol for the OTLP exporter. grpc OTEL_EXPORTER_OTLP_TIMEOUT Maximum time interval for the OTLP exporter to wait for each batch export. 10s OTEL_EXPORTER_OTLP_INSECURE Disables client transport security for gRPC requests. An HTTPS schema overrides it. False
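As a concrete illustration of the table above, the following is a hedged sketch, not part of the official procedure, that sets several of these variables on an instrumented workload. The Deployment name myapp and the Collector Service name otel-collector in the observability namespace are assumptions rather than objects created by this procedure:

# Point an instrumented Deployment at the Collector's OTLP gRPC endpoint by setting
# the standard OpenTelemetry environment variables on its containers.
oc set env deployment/myapp \
  OTEL_SERVICE_NAME=myapp \
  OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.observability.svc.cluster.local:4317 \
  OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
  OTEL_TRACES_SAMPLER=parentbased_always_on \
  OTEL_EXPORTER_OTLP_INSECURE=true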
[ "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/otel-sending-traces-and-metrics-to-otel-collector
Chapter 4. Remote health monitoring with connected clusters
Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on console.redhat.com/openshift . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on console.redhat.com/openshift are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. 
It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: The unique random identifier that is generated during an installation Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The name of the provider platform that OpenShift Container Platform is deployed on and the data center location Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster The OpenShift Container Platform framework components installed in a cluster and their condition and status Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Information about degraded software Information about nodes that are marked as NotReady Events for all namespaces listed as "related objects" for a degraded Operator Configuration details that help Red Hat Support to provide beneficial support for customers. This includes node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services. Information about the validity of certificates Telemetry does not collect identifying information such as user names, or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. 
If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on, such as Amazon Web Services, and the region that the cluster is located in If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to console.redhat.com every two hours for processing. The Insights Operator also downloads the latest Insights analysis from console.redhat.com . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. 
Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can see the cluster and components time series data captured by Telemetry. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has either the cluster-admin role or the cluster-monitoring-view role. Procedure Find the URL for the Prometheus service that runs in the OpenShift Container Platform cluster: USD oc get route prometheus-k8s -n openshift-monitoring -o jsonpath="{.spec.host}" Navigate to the URL. Enter this query in the Expression input box and press Execute : This query replicates the request that Telemetry makes against a running OpenShift Container Platform cluster's Prometheus service and returns the full set of time series captured by Telemetry. 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. 
Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the Red Hat OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The Red Hat OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. USD oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Warning Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 
3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.4.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of your exposure to issues that can affect service availability, fault tolerance, performance, or security. Insights repeatedly analyzes the data that Insights Operator sends using a database of recommendations , which are sets of conditions that can leave your OpenShift Container Platform clusters at risk. Your data is then uploaded to the Insights Advisor service on Red Hat Hybrid Cloud Console where you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.4.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.4.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on Red Hat Hybrid Cloud Console . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on Red Hat Hybrid Cloud Console . Remote health reporting is enabled, which is the default. You are logged in to Red Hat Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on Red Hat Hybrid Cloud Console . 
Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.4.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to Red Hat Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on Red Hat Hybrid Cloud Console . Click the X icons to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.4.5. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console . You are logged in to Red Hat Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on Red Hat Hybrid Cloud Console . Click the name of the recommendation to disable. You are directed to the single recommendation page. To disable the recommendation for a single cluster: Click the Options menu for that cluster, and then click Disable recommendation for cluster . Enter a justification note and click Save . To disable the recommendation for all of your clusters: Click Actions Disable recommendation . Enter a justification note and click Save . 4.4.6. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you will no longer see the recommendation in Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console . You are logged in to Red Hat Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on Red Hat Hybrid Cloud Console . Filter the recommendations by Status Disabled . Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.4.7. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. 
Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.5. Using Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.5.1. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.5.2. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . 4.6.1. Copying an Insights Operator archive You must create a copy of your Insights Operator data archive for upload to cloud.redhat.com . Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . 
Procedure Find the name of the Insights Operator pod that is currently running: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives from the Insights Operator container: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.6.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive. Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Clusters menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical).
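A small convenience for the token-copy step in the upload procedure above: rather than opening the dockerconfig.json file and copying the "cloud.openshift.com" "auth" value by hand, you can read it with jq. This is a minimal sketch, not part of the original procedure; it assumes jq is installed and that oc extract wrote the pull secret to a file named .dockerconfigjson in the current directory. The TOKEN variable is introduced here for illustration only, and <cluster_id> and <path_to_archive> remain placeholders that you must fill in as described above.
# Read the "cloud.openshift.com" auth token from the extracted pull secret.
TOKEN=$(jq -r '.auths."cloud.openshift.com".auth' .dockerconfigjson)
# Reuse the upload command from the procedure, substituting the token.
curl -v \
  -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/<cluster_id>" \
  -H "Authorization: Bearer ${TOKEN}" \
  -F "upload=@<path_to_archive>; type=application/vnd.redhat.openshift.periodic+tar" \
  https://console.redhat.com/api/ingress/v1/upload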
[ "oc get route prometheus-k8s -n openshift-monitoring -o jsonpath=\"{.spec.host}\"", "{__name__=~\"cluster:usage:.*|count:up0|count:up1|cluster_version|cluster_version_available_updates|cluster_operator_up|cluster_operator_conditions|cluster_version_payload|cluster_installer|cluster_infrastructure_provider|cluster_feature_set|instance:etcd_object_counts:sum|ALERTS|code:apiserver_request_total:rate:sum|cluster:capacity_cpu_cores:sum|cluster:capacity_memory_bytes:sum|cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum|openshift:cpu_usage_cores:sum|openshift:memory_usage_bytes:sum|workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum|cluster:virt_platform_nodes:sum|cluster:node_instance_type_count:sum|cnv:vmi_status_running:count|node_role_os_version_machine:cpu_capacity_cores:sum|node_role_os_version_machine:cpu_capacity_sockets:sum|subscription_sync_total|csv_succeeded|csv_abnormal|ceph_cluster_total_bytes|ceph_cluster_total_used_raw_bytes|ceph_health_status|job:ceph_osd_metadata:count|job:kube_pv:count|job:ceph_pools_iops:total|job:ceph_pools_iops_bytes:total|job:ceph_versions_running:count|job:noobaa_total_unhealthy_buckets:sum|job:noobaa_bucket_count:sum|job:noobaa_total_object_count:sum|noobaa_accounts_num|noobaa_total_usage|console_url|cluster:network_attachment_definition_instances:max|cluster:network_attachment_definition_enabled_instance_up:max|insightsclient_request_send_total|cam_app_workload_migrations|cluster:apiserver_current_inflight_requests:sum:max_over_time:2m|cluster:telemetry_selected_series:count\",alertstate=~\"firing|\"}", "INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)", "oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data", "oc extract secret/pull-secret -n openshift-config --to=.", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running", "oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1", "{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }", "INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)", "oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data", "oc extract secret/pull-secret -n openshift-config --to=.", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }", "curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload", "* Connection #0 to host console.redhat.com left intact 
{\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%" ]
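Related to the gather-durations step earlier in this chapter: if you prefer to scan the gathers.json file from the command line rather than reading it by eye, the following sketch ranks gather operations by duration. It is not part of the original procedure; it assumes jq is installed and that <archive_dir> is a placeholder for the directory of an extracted Insights Operator archive that contains insights-operator/gathers.json.
# List gather operations sorted by duration_in_ms, slowest first.
# The recursive filter tolerates either a flat JSON array or a wrapped object.
jq -r '.. | objects | select(has("duration_in_ms")) | "\(.duration_in_ms)\t\(.name)"' \
  <archive_dir>/insights-operator/gathers.json | sort -rn | head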
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/support/remote-health-monitoring-with-connected-clusters
Chapter 7. Creating RHEL system image and uploading to Microsoft Azure by using Insights image builder
Chapter 7. Creating RHEL system image and uploading to Microsoft Azure by using Insights image builder You can create customized RHEL system images by using Insights image builder, and upload those images to the Microsoft Azure cloud target environment. Then, you can create a Virtual Machine (VM) from the image you shared with the Microsoft Azure Cloud account. Warning Red Hat Hybrid Cloud Console does not support uploading the images that you created for the Microsoft Azure target environment to GovCloud regions. 7.1. Authorizing Insights image builder to push images to Microsoft Azure Cloud To authorize Insights image builder to push images to the Microsoft Azure cloud, you must: Configure Insights image builder as an authorized application for your tenant GUID. Give it the role of Contributor to at least one resource group. To add Insights image builder as an authorized application, follow these steps: Prerequisites You have an existing Resource Group in the Microsoft Azure portal. You have the User Access Administrator role rights. Your Microsoft Azure subscription has Microsoft.Storage and Microsoft.Compute as resource providers. Procedure Access Insights image builder in a browser. The Insights image builder dashboard appears. Click Create image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Release list, select the Release that you want to use: for example, choose Red Hat Enterprise Linux (RHEL). From the Select target environments option, select Microsoft Azure . Click . On the Target Environment - Microsoft Azure window, to add Insights image builder as an authorized application, complete the following steps: Insert your Tenant GUID . Image builder checks whether your Tenant GUID is correctly formatted, and the Authorize image builder button becomes available. Click Authorize image builder to authorize Insights image builder to push images to the Microsoft Azure cloud. This redirects you to the Microsoft Azure portal. Log in with your credentials. Click Accept the Permission requested . Confirm that Insights image builder is authorized for your tenant. Search for Azure Active Directory and, from the left menu, choose Enterprise applications . Search for Insights image builder and confirm that it is authorized. Add the Enterprise application as a contributor to your Resource Group . In the search bar, type Resource Groups and select the first entry under Services . This redirects you to the Resource Groups dashboard. Select your Resource Group . On the left menu, click Access control (IAM) to add a permission so that the Insights image builder application can access your resource group. From the menu, click the Role assignments tab. Click +Add . From the dropdown menu, choose Add role assignment . A menu appears on the left side. Insert the following details: Role: assign the Contributor role. Assign access to: User, group, service principal. Add members: click +Select members , type Red Hat in the search bar, and press Enter. Select: the Insights image builder application. The Insights image builder application is now authorized to push images to the Microsoft Azure cloud. Note Even though any user can add an application to the resource group, the application cannot locate any resources unless the account administrator adds the shared application as a contributor under the IAM section of the resource group. Verification From the menu, click the Role assignments tab. 
You can see Insights image builder set as a Contributor of the Resource Group you selected. Additional resources Manage Microsoft Azure Resource Manager resource groups by using the Microsoft Azure portal 7.2. Creating a customized RHEL system image for Microsoft Azure using image builder After you authorize image builder to push images to Microsoft Azure, create customized system images using image builder and upload those images to Microsoft Azure. To do so, follow these steps: Prerequisites You have created a Microsoft Azure Storage Account . You authorized image builder to push images to Microsoft Azure. See Authorizing Insights image builder to push images to Microsoft Azure Cloud . Procedure On the Target Environment - Microsoft Azure window, complete the following steps: Enter your Tenant GUID : you can find your Tenant ID in the Microsoft Azure Active Directory application in the Microsoft Azure portal. Enter your Subscription ID : you can find your Subscription ID by accessing the Microsoft Azure console. Enter your Resource group : the name of your Resource Group in the Microsoft Azure portal. Click . On the Registration page, select the type of registration that you want to use. You can select from these options: Register images with Red Hat : Register and connect image instances, subscriptions and insights with Red Hat. For details on how to embed an activation key and register systems on first boot, see Creating a customized system image with an embed subscription by using Insights image builder . Register image instances only : Register and connect only image instances and subscriptions with Red Hat. Register later : Register the system after the image creation. Click . Optional: On the Packages page, add packages to your image. See Adding packages during image creation by using Insights image builder . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . After you complete the steps in the Create image wizard, the image builder dashboard is displayed. Insights image builder starts composing a RHEL Azure Disk Image for the x86_64 architecture, uploads it to the resource group account you specified, and creates a Microsoft Azure Image. The Insights image builder Images dashboard opens. On the dashboard, you can see details such as the Image UUID , the cloud target environment , the image OS release , and the status of the image creation. After the status is Ready , the Azure Disk Image is shared with the specified account. Possible statuses: Pending: the image upload and cloud registration are being processed. In Progress: the image upload and cloud registration are ongoing. Ready: the image upload and cloud registration are completed. Failed: the image upload and cloud registration failed. Note The image build, upload, and cloud registration processes can take up to ten minutes to complete. Verification Check that the image status is Ready . This means that the image upload and cloud registration completed successfully. Additional resources How to find your Microsoft Azure Active Directory tenant ID 7.3. 
Accessing your customized RHEL system image from your Microsoft Azure account After the image build and upload finish and the cloud registration status is marked as Ready , you can access the Azure Disk Image from your Microsoft Azure account. Prerequisites You have access to your Microsoft Azure dashboard . Procedure Access your Microsoft Azure dashboard and navigate to the Resource group page. Verification After you access your Microsoft Azure account, you can see that the image was successfully shared with the resource group account you specified. If you prefer the command line, see the Azure CLI sketch at the end of this chapter. Note If the image is not visible there, you might have issues with the upload process. Return to the Insights image builder dashboard and check if the image is marked as Ready . 7.4. Creating a VM from the RHEL system image shared with your Microsoft Azure account You can create a Virtual Machine (VM) from the image you shared with the Microsoft Azure Cloud account by using Insights image builder. Prerequisites You must have a Microsoft Azure Storage Account created. You must have uploaded the required image to the Microsoft Azure Cloud account. Procedure Click + Create VM . You are redirected to the Create a virtual machine dashboard. In the Basic tab under Project Details , your Subscription and the Resource Group are pre-set. Optional: If you want to create a new Resource Group: Click Create new . A pop-up prompts you to create the Resource Group Name container. Insert a name and click OK . Otherwise, keep the Resource Group that is already pre-set. Under Instance Details , insert: Virtual machine name , Region , Image , and Size : choose a VM size that best suits your needs. Keep the default values for the remaining fields. Under Administrator account , enter the following details: Username : the name of the account administrator. SSH public key source : from the drop-down menu, select Generate new key pair . Key pair name : insert a name for the key pair. Under Inbound port rules : Public inbound ports : select Allow selected ports . Select inbound ports : Use the default set SSH (22) . Click Review + Create . You are redirected to the Review + create tab. You receive a confirmation that the validation passed. Review the details and click Create . To change options, click . A Generates New Key Pair pop-up opens. Click Download private key and create resources . Save the key file in the yourKey.pem file format. After the deployment is complete, click Go to resource . You are redirected to a new window with your VM details. Select the public IP address on the top right side of the page and copy it to your clipboard. Verification Create an SSH connection to connect to the Virtual Machine you created. To do so, follow these steps: Open a terminal. At your prompt, open an SSH connection to your virtual machine. Add the user name, replace the IP address with the one from your VM, and replace the path to the .pem file with the path to where the key file was downloaded. For example: You are prompted to confirm that you want to continue connecting. Type yes to continue. As a result, the image you shared with the Microsoft Azure Storage account is started and ready to be provisioned. Note The default user is azureuser and the password is azureuser .
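As referenced in the verification for section 7.3 above, if you prefer to check the shared image from the command line instead of the Microsoft Azure dashboard, the following is a minimal Azure CLI sketch. It is not part of the original procedure; it assumes the az CLI is installed and logged in to the subscription you used, and <resource_group> is a placeholder for the resource group you specified during image creation.
# List the images in the resource group; the image created by Insights
# image builder should appear here once its status is Ready.
az image list --resource-group <resource_group> --output table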
[ "ssh -i <yourKey.pem file location> <username>@<IP_address>", "ssh -i ./Downloads/yourKey.pem azureuser@<IP_address>" ]
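One practical note on the SSH example above: ssh refuses private key files that are readable by other users, so you may need to restrict the permissions on the downloaded .pem file before connecting. A minimal sketch, reusing the example key path and the default azureuser account from the procedure above; <IP_address> remains a placeholder for your VM's public IP address.
# Restrict the key file to the current user, then connect.
chmod 400 ./Downloads/yourKey.pem
ssh -i ./Downloads/yourKey.pem azureuser@<IP_address>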
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/creating-and-uploading-customized-rhel-system-image-to-azure-using-image-builder
Tutorials
Tutorials Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS tutorials Red Hat OpenShift Documentation Team
[ "mkdir scratch cd scratch cat << 'EOF' > verify-permissions.sh #!/bin/bash while getopts 'p:' OPTION; do case \"USDOPTION\" in p) PREFIX=\"USDOPTARG\" ;; ?) echo \"script usage: USD(basename \\USD0) [-p PREFIX]\" >&2 exit 1 ;; esac done shift \"USD((USDOPTIND -1))\" rosa create account-roles --mode manual --prefix USDPREFIX INSTALLER_POLICY=USD(cat sts_installer_permission_policy.json | jq ) CONTROL_PLANE_POLICY=USD(cat sts_instance_controlplane_permission_policy.json | jq) WORKER_POLICY=USD(cat sts_instance_worker_permission_policy.json | jq) SUPPORT_POLICY=USD(cat sts_support_permission_policy.json | jq) simulatePolicy () { outputFile=\"USD{2}.results\" echo USD2 aws iam simulate-custom-policy --policy-input-list \"USD1\" --action-names USD(jq '.Statement | map(select(.Effect == \"Allow\"))[].Action | if type == \"string\" then . else .[] end' \"USD2\" -r) --output text > USDoutputFile } simulatePolicy \"USDINSTALLER_POLICY\" \"sts_installer_permission_policy.json\" simulatePolicy \"USDCONTROL_PLANE_POLICY\" \"sts_instance_controlplane_permission_policy.json\" simulatePolicy \"USDWORKER_POLICY\" \"sts_instance_worker_permission_policy.json\" simulatePolicy \"USDSUPPORT_POLICY\" \"sts_support_permission_policy.json\" EOF chmod +x verify-permissions.sh ./verify-permissions.sh -p SimPolTest", "for file in USD(ls *.results); do echo USDfile; cat USDfile; done", "sts_installer_permission_policy.json.results EVALUATIONRESULTS autoscaling:DescribeAutoScalingGroups allowed * MATCHEDSTATEMENTS PolicyInputList.1 IAM Policy ENDPOSITION 6 195 STARTPOSITION 17 3 EVALUATIONRESULTS ec2:AllocateAddress allowed * MATCHEDSTATEMENTS PolicyInputList.1 IAM Policy ENDPOSITION 6 195 STARTPOSITION 17 3 EVALUATIONRESULTS ec2:AssociateAddress allowed * MATCHEDSTATEMENTS PolicyInputList.1 IAM Policy", "export VPC_ID=<vpc_ID> 1 export REGION=<region> 2 export VPC_CIDR=<vpc_CIDR> 3", "echo \"VPC ID: USD{VPC_ID}, VPC CIDR Range: USD{VPC_CIDR}, Region: USD{REGION}\"", "SG_ID=USD(aws ec2 create-security-group --group-name rosa-inbound-resolver --description \"Security group for ROSA inbound resolver\" --vpc-id USD{VPC_ID} --region USD{REGION} --output text) aws ec2 authorize-security-group-ingress --group-id USD{SG_ID} --protocol tcp --port 53 --cidr USD{VPC_CIDR} --region USD{REGION} aws ec2 authorize-security-group-ingress --group-id USD{SG_ID} --protocol udp --port 53 --cidr USD{VPC_CIDR} --region USD{REGION}", "RESOLVER_ID=USD(aws route53resolver create-resolver-endpoint --name rosa-inbound-resolver --creator-request-id rosa-USD(date '+%Y-%m-%d') --security-group-ids USD{SG_ID} --direction INBOUND --ip-addresses USD(aws ec2 describe-subnets --filter Name=vpc-id,Values=USD{VPC_ID} --region USD{REGION} | jq -jr '.Subnets | map(\"SubnetId=\\(.SubnetId) \") | .[]') --region USD{REGION} --output text --query 'ResolverEndpoint.Id')", "RESOLVER_ID=USD(aws route53resolver create-resolver-endpoint --name rosa-inbound-resolver --creator-request-id rosa-USD(date '+%Y-%m-%d') --security-group-ids USD{SG_ID} --direction INBOUND --ip-addresses SubnetId=<subnet_ID>,Ip=<endpoint_IP> SubnetId=<subnet_ID>,Ip=<endpoint_IP> \\ 1 --region USD{REGION} --output text --query 'ResolverEndpoint.Id')", "aws route53resolver list-resolver-endpoint-ip-addresses --resolver-endpoint-id USD{RESOLVER_ID} --region=USD{REGION} --query 'IpAddresses[*].Ip'", "[ \"10.0.45.253\", \"10.0.23.131\", \"10.0.148.159\" ]", "aws route53 list-hosted-zones-by-vpc --vpc-id USD{VPC_ID} --vpc-region USD{REGION} --query 'HostedZoneSummaries[*].Name' --output 
table", "---------------------------------------------- | ListHostedZonesByVPC | +--------------------------------------------+ | domain-prefix.agls.p3.openshiftapps.com. | +--------------------------------------------+", "zone \"<domain-prefix>.<unique-ID>.p1.openshiftapps.com\" { 1 type forward; forward only; forwarders { 2 10.0.45.253; 10.0.23.131; 10.0.148.159; }; };", "export DOMAIN=apps.example.com 1 export AWS_PAGER=\"\" export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{CLUSTER}/cloudfront-waf\" mkdir -p USD{SCRATCH} echo \"Cluster: USD{CLUSTER}, Region: USD{REGION}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export CLUSTER=my-custom-value", "oc -n openshift-ingress create secret tls waf-tls --cert=fullchain.pem --key=privkey.pem", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: cloudfront-waf namespace: openshift-ingress-operator spec: domain: apps.example.com 1 defaultCertificate: name: waf-tls endpointPublishingStrategy: loadBalancer: dnsManagementPolicy: Unmanaged providerParameters: aws: type: NLB type: AWS scope: External type: LoadBalancerService routeSelector: 2 matchLabels: route: waf", "oc apply -f waf-ingress-controller.yaml", "oc -n openshift-ingress get service/router-cloudfront-waf", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-cloudfront-waf LoadBalancer 172.30.16.141 a68a838a7f26440bf8647809b61c4bc8-4225395f488830bd.elb.us-east-1.amazonaws.com 80:30606/TCP,443:31065/TCP 2m19s", "cat << EOF > USD{SCRATCH}/waf-rules.json [ { \"Name\": \"AWS-AWSManagedRulesCommonRuleSet\", \"Priority\": 0, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesCommonRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesCommonRuleSet\" } }, { \"Name\": \"AWS-AWSManagedRulesSQLiRuleSet\", \"Priority\": 1, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesSQLiRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesSQLiRuleSet\" } } ] EOF", "WAF_WACL=USD(aws wafv2 create-web-acl --name cloudfront-waf --region USD{REGION} --default-action Allow={} --scope CLOUDFRONT --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=USD{CLUSTER}-waf-metrics --rules file://USD{SCRATCH}/waf-rules.json --query 'Summary.Name' --output text)", "NLB=USD(oc -n openshift-ingress get service router-cloudfront-waf -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "aws acm import-certificate --certificate file://cert.pem --certificate-chain file://fullchain.pem --private-key file://privkey.pem --region us-east-1", "aws cloudfront list-distributions --query \"DistributionList.Items[?Origins.Items[?DomainName=='USD{NLB}']].DomainName\" --output text", "*.apps.example.com CNAME d1b2c3d4e5f6g7.cloudfront.net", "oc new-project hello-world", "oc -n hello-world new-app --image=docker.io/openshift/hello-openshift", "oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls 
--hostname hello-openshift.USD{DOMAIN}", "oc -n hello-world label route.route.openshift.io/hello-openshift-tls route=waf", "curl \"https://hello-openshift.USD{DOMAIN}\"", "Hello OpenShift!", "curl -X POST \"https://hello-openshift.USD{DOMAIN}\" -F \"user='<script><alert>Hello></alert></script>'\"", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\"> <HTML><HEAD><META HTTP-EQUIV=\"Content-Type\" CONTENT=\"text/html; charset=iso-8859-1\"> <TITLE>ERROR: The request could not be satisfied</TITLE> </HEAD><BODY> <H1>403 ERROR</H1> <H2>The request could not be satisfied.</H2> <HR noshade size=\"1px\"> Request blocked. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner. <BR clear=\"all\"> If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation. <BR clear=\"all\"> <HR noshade size=\"1px\"> <PRE> Generated by cloudfront (CloudFront) Request ID: nFk9q2yB8jddI6FZOTjdliexzx-FwZtr8xUQUNT75HThPlrALDxbag== </PRE> <ADDRESS> </ADDRESS> </BODY></HTML>", "export AWS_PAGER=\"\" export CLUSTER=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\") export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{CLUSTER}/alb-waf\" mkdir -p USD{SCRATCH} echo \"Cluster: USD(echo USD{CLUSTER} | sed 's/-[a-z0-9]\\{5\\}USD//'), Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export VPC_ID=<vpc-id> 1 export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>) 2 export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>) 3", "aws ec2 create-tags --resources USD{VPC_ID} --tags Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "aws ec2 create-tags --resources USD{PUBLIC_SUBNET_IDS} --tags Key=kubernetes.io/role/elb,Value='1' Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "aws ec2 create-tags --resources USD{PRIVATE_SUBNET_IDS} --tags Key=kubernetes.io/role/internal-elb,Value='1' Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "oc new-project aws-load-balancer-operator", "POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}\" --output text)", "if [[ -z \"USD{POLICY_ARN}\" ]]; then wget -O \"USD{SCRATCH}/load-balancer-operator-policy.json\" https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json POLICY_ARN=USD(aws --region \"USDREGION\" --query Policy.Arn --output text iam create-policy --policy-name aws-load-balancer-operator-policy --policy-document \"file://USD{SCRATCH}/load-balancer-operator-policy.json\") fi", "cat <<EOF > \"USD{SCRATCH}/trust-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": [\"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager\", \"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster\"] } }, 
\"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"USD{CLUSTER}-alb-operator\" --assume-role-policy-document \"file://USD{SCRATCH}/trust-policy.json\" --query Role.Arn --output text)", "aws iam attach-role-policy --role-name \"USD{CLUSTER}-alb-operator\" --policy-arn USD{POLICY_ARN}", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator stringData: credentials: | [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1.0 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: aws-load-balancer-operator.v1.0.0 EOF", "cat << EOF | oc apply -f - apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: credentials: name: aws-load-balancer-operator enabledAddons: - AWSWAFv2 EOF", "oc -n aws-load-balancer-operator get pods", "NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s", "oc new-project hello-world", "oc new-app -n hello-world --image=docker.io/openshift/hello-openshift", "oc -n hello-world patch service hello-openshift -p '{\"spec\":{\"type\":\"NodePort\"}}'", "cat << EOF | oc apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-openshift-alb namespace: hello-world annotations: alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: hello-openshift port: number: 8080 EOF", "INGRESS=USD(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') curl \"http://USD{INGRESS}\"", "Hello OpenShift!", "cat << EOF > USD{SCRATCH}/waf-rules.json [ { \"Name\": \"AWS-AWSManagedRulesCommonRuleSet\", \"Priority\": 0, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesCommonRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesCommonRuleSet\" } }, { \"Name\": \"AWS-AWSManagedRulesSQLiRuleSet\", \"Priority\": 1, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesSQLiRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesSQLiRuleSet\" } } ] EOF", "WAF_ARN=USD(aws wafv2 create-web-acl --name USD{CLUSTER}-waf --region USD{REGION} --default-action Allow={} --scope REGIONAL --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=USD{CLUSTER}-waf-metrics --rules file://USD{SCRATCH}/waf-rules.json --query 'Summary.ARN' --output text)", "oc 
annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb alb.ingress.kubernetes.io/wafv2-acl-arn=USD{WAF_ARN}", "curl \"http://USD{INGRESS}\"", "Hello OpenShift!", "curl -X POST \"http://USD{INGRESS}\" -F \"user='<script><alert>Hello></alert></script>'\"", "<html> <head><title>403 Forbidden</title></head> <body> <center><h1>403 Forbidden</h1></center> </body> </html", "export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text` export CLUSTER_VERSION=`rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.'` export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export AWS_PAGER=\"\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi echo USD{POLICY_ARN}", "cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn 
--output text) echo USD{ROLE_ARN}", "aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}", "oc create namespace openshift-adp", "cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region=<aws_region> 1 EOF oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials", "cat << EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-adp- namespace: openshift-adp name: oadp spec: targetNamespaces: - openshift-adp --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp spec: channel: stable-1.2 installPlanApproval: Automatic name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "watch oc -n openshift-adp get pods", "NAME READY STATUS RESTARTS AGE openshift-adp-controller-manager-546684844f-qqjhn 1/1 Running 0 22s", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF", "oc get pvc -n <namespace> 1", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF", "cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws restic: enable: false snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials enableSharedConfig: 'true' profile: default region: USD{REGION} provider: aws EOF", "oc create namespace hello-world oc new-app -n hello-world --image=docker.io/openshift/hello-openshift", "oc expose service/hello-openshift -n hello-world", "curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`", "Hello OpenShift!", "cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - 
hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF", "watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"", "{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }", "oc delete ns hello-world", "cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF", "watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"", "{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }", "oc -n hello-world get pods", "NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s", "curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`", "Hello OpenShift!", "oc delete ns hello-world", "oc delete backups.velero.io hello-world oc delete restores.velero.io hello-world", "velero backup delete hello-world velero restore delete hello-world", "oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa", "oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp", "oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge", "oc -n openshift-adp delete subscription oadp-operator", "oc delete ns redhat-openshift-adp", "for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done for CRD in `oc get crds | grep -i oadp | awk '{print USD1}'`; do oc delete crd USDCRD; done", "aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp", "aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"", "aws iam delete-role --role-name \"USD{ROLE_NAME}\"", "export AWS_PAGER=\"\" export ROSA_CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{ROSA_CLUSTER_NAME}/alb-operator\" mkdir -p USD{SCRATCH} echo \"Cluster: USD{ROSA_CLUSTER_NAME}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export VPC_ID=<vpc-id> export PUBLIC_SUBNET_IDS=<public-subnets> export PRIVATE_SUBNET_IDS=<private-subnets> export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\")", "aws ec2 create-tags --resources USD{VPC_ID} --tags Key=kubernetes.io/cluster/USD{CLUSTER_NAME},Value=owned --region USD{REGION}", "aws ec2 create-tags --resources USD{PUBLIC_SUBNET_IDS} --tags Key=kubernetes.io/role/elb,Value='' --region USD{REGION}", "aws ec2 create-tags --resources \"USD{PRIVATE_SUBNET_IDS}\" --tags Key=kubernetes.io/role/internal-elb,Value='' --region USD{REGION}", "oc new-project aws-load-balancer-operator POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}\" --output 
text) if [[ -z \"USD{POLICY_ARN}\" ]]; then wget -O \"USD{SCRATCH}/load-balancer-operator-policy.json\" https://raw.githubusercontent.com/rh-mobb/documentation/main/content/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json POLICY_ARN=USD(aws --region \"USDREGION\" --query Policy.Arn --output text iam create-policy --policy-name aws-load-balancer-operator-policy --policy-document \"file://USD{SCRATCH}/load-balancer-operator-policy.json\") fi echo USDPOLICY_ARN", "cat <<EOF > \"USD{SCRATCH}/trust-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": [\"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager\", \"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster\"] } }, \"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROSA_CLUSTER_NAME}-alb-operator\" --assume-role-policy-document \"file://USD{SCRATCH}/trust-policy.json\" --query Role.Arn --output text) echo USDROLE_ARN aws iam attach-role-policy --role-name \"USD{ROSA_CLUSTER_NAME}-alb-operator\" --policy-arn USDPOLICY_ARN", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator stringData: credentials: | [default] role_arn = USDROLE_ARN web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1.0 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: aws-load-balancer-operator.v1.0.0 EOF", "cat << EOF | oc apply -f - apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: credentials: name: aws-load-balancer-operator EOF", "oc -n aws-load-balancer-operator get pods", "NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s", "oc new-project hello-world", "oc new-app -n hello-world --image=docker.io/openshift/hello-openshift", "cat << EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: hello-openshift-nodeport namespace: hello-world spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: deployment: hello-openshift EOF", "cat << EOF | oc apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-openshift-alb namespace: hello-world annotations: alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: hello-openshift-nodeport port: number: 80 EOF", "INGRESS=USD(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') curl \"http://USD{INGRESS}\"", "Hello OpenShift!", "cat << EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: hello-openshift-nlb namespace: 
hello-world annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: LoadBalancer selector: deployment: hello-openshift EOF", "NLB=USD(oc -n hello-world get service hello-openshift-nlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') curl \"http://USD{NLB}\"", "Hello OpenShift!", "oc delete project hello-world", "oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator aws iam detach-role-policy --role-name \"USD{ROSA_CLUSTER_NAME}-alb-operator\" --policy-arn USDPOLICY_ARN aws iam delete-role --role-name \"USD{ROSA_CLUSTER_NAME}-alb-operator\"", "aws iam delete-policy --policy-arn USDPOLICY_ARN", "domain=USD(rosa describe cluster -c <cluster_name> | grep \"DNS\" | grep -oE '\\S+.openshiftapps.com') echo \"OAuth callback URL: https://oauth.USD{domain}/oauth2callback/AAD\"", "CLUSTER_NAME=example-cluster 1 IDP_NAME=AAD 2 APP_ID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy 3 CLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 4 TENANT_ID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz 5", "rosa create idp --cluster USD{CLUSTER_NAME} --type openid --name USD{IDP_NAME} --client-id USD{APP_ID} --client-secret USD{CLIENT_SECRET} --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 --email-claims email --name-claims name --username-claims preferred_username --extra-scopes email,profile --groups-claims groups", "rosa create idp --cluster USD{CLUSTER_NAME} --type openid --name USD{IDP_NAME} --client-id USD{APP_ID} --client-secret USD{CLIENT_SECRET} --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 --email-claims email --name-claims name --username-claims preferred_username --extra-scopes email,profile", "rosa grant user cluster-admin --user=<USERNAME> 1 --cluster=USD{CLUSTER_NAME}", "oc create clusterrolebinding cluster-admin-group --clusterrole=cluster-admin --group=<GROUP_ID> 1", "oc login --token=<your-token> --server=<your-server-url>", "oc get authentication.config.openshift.io cluster -o json | jq .spec.serviceAccountIssuer", "\"https://xxxxx.cloudfront.net/xxxxx\"", "oc new-project csi-secrets-store oc adm policy add-scc-to-user privileged system:serviceaccount:csi-secrets-store:secrets-store-csi-driver oc adm policy add-scc-to-user privileged system:serviceaccount:csi-secrets-store:csi-secrets-store-provider-aws", "export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text` export AWS_PAGER=\"\"", "helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts", "helm repo update", "helm upgrade --install -n csi-secrets-store csi-secrets-store-driver secrets-store-csi-driver/secrets-store-csi-driver", "oc -n csi-secrets-store apply -f https://raw.githubusercontent.com/rh-mobb/documentation/main/content/misc/secrets-store-csi/aws-provider-installer.yaml", "oc -n csi-secrets-store get ds csi-secrets-store-provider-aws csi-secrets-store-driver-secrets-store-csi-driver", "oc label csidriver.storage.k8s.io/secrets-store.csi.k8s.io security.openshift.io/csi-ephemeral-volume-profile=restricted", "SECRET_ARN=USD(aws --region \"USDREGION\" 
secretsmanager create-secret --name MySecret --secret-string '{\"username\":\"shadowman\", \"password\":\"hunter2\"}' --query ARN --output text); echo USDSECRET_ARN", "cat << EOF > policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\" ], \"Resource\": [\"USDSECRET_ARN\"] }] } EOF", "POLICY_ARN=USD(aws --region \"USDREGION\" --query Policy.Arn --output text iam create-policy --policy-name openshift-access-to-mysecret-policy --policy-document file://policy.json); echo USDPOLICY_ARN", "cat <<EOF > trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": [\"system:serviceaccount:my-application:default\"] } }, \"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name openshift-access-to-mysecret --assume-role-policy-document file://trust-policy.json --query Role.Arn --output text); echo USDROLE_ARN", "aws iam attach-role-policy --role-name openshift-access-to-mysecret --policy-arn USDPOLICY_ARN", "oc new-project my-application", "oc annotate -n my-application serviceaccount default eks.amazonaws.com/role-arn=USDROLE_ARN", "cat << EOF | oc apply -f - apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-application-aws-secrets spec: provider: aws parameters: objects: | - objectName: \"MySecret\" objectType: \"secretsmanager\" EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Pod metadata: name: my-application labels: app: my-application spec: volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-application-aws-secrets\" containers: - name: my-application-deployment image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true EOF", "oc exec -it my-application -- cat /mnt/secrets-store/MySecret", "oc delete project my-application", "helm delete -n csi-secrets-store csi-secrets-store-driver", "oc adm policy remove-scc-from-user privileged system:serviceaccount:csi-secrets-store:secrets-store-csi-driver; oc adm policy remove-scc-from-user privileged system:serviceaccount:csi-secrets-store:csi-secrets-store-provider-aws", "oc -n csi-secrets-store delete -f https://raw.githubusercontent.com/rh-mobb/documentation/main/content/misc/secrets-store-csi/aws-provider-installer.yaml", "aws iam detach-role-policy --role-name openshift-access-to-mysecret --policy-arn USDPOLICY_ARN; aws iam delete-role --role-name openshift-access-to-mysecret; aws iam delete-policy --policy-arn USDPOLICY_ARN", "aws secretsmanager --region USDREGION delete-secret --secret-id USDSECRET_ARN", "export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export REGION=USD(rosa describe cluster -c USD{ROSA_CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed 's|^https://||') export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text` export ACK_SERVICE=s3 export ACK_SERVICE_ACCOUNT=ack-USD{ACK_SERVICE}-controller export 
POLICY_ARN=arn:aws:iam::aws:policy/AmazonS3FullAccess export AWS_PAGER=\"\" export SCRATCH=\"/tmp/USD{ROSA_CLUSTER_NAME}/ack\" mkdir -p USD{SCRATCH}", "echo \"Cluster: USD{ROSA_CLUSTER_NAME}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "cat <<EOF > \"USD{SCRATCH}/trust-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": \"system:serviceaccount:ack-system:USD{ACK_SERVICE_ACCOUNT}\" } }, \"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"ack-USD{ACK_SERVICE}-controller\" --assume-role-policy-document \"file://USD{SCRATCH}/trust-policy.json\" --query Role.Arn --output text) echo USDROLE_ARN aws iam attach-role-policy --role-name \"ack-USD{ACK_SERVICE}-controller\" --policy-arn USD{POLICY_ARN}", "oc new-project ack-system", "cat <<EOF > \"USD{SCRATCH}/config.txt\" ACK_ENABLE_DEVELOPMENT_LOGGING=true ACK_LOG_LEVEL=debug ACK_WATCH_NAMESPACE= AWS_REGION=USD{REGION} AWS_ENDPOINT_URL= ACK_RESOURCE_TAGS=USD{CLUSTER_NAME} ENABLE_LEADER_ELECTION=true LEADER_ELECTION_NAMESPACE= EOF", "oc -n ack-system create configmap --from-env-file=USD{SCRATCH}/config.txt ack-USD{ACK_SERVICE}-user-config", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ack-USD{ACK_SERVICE}-controller namespace: ack-system spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ack-USD{ACK_SERVICE}-controller namespace: ack-system spec: channel: alpha installPlanApproval: Automatic name: ack-USD{ACK_SERVICE}-controller source: community-operators sourceNamespace: openshift-marketplace EOF", "oc -n ack-system annotate serviceaccount USD{ACK_SERVICE_ACCOUNT} eks.amazonaws.com/role-arn=USD{ROLE_ARN} && oc -n ack-system rollout restart deployment ack-USD{ACK_SERVICE}-controller", "oc -n ack-system get pods", "NAME READY STATUS RESTARTS AGE ack-s3-controller-585f6775db-s4lfz 1/1 Running 0 51s", "cat << EOF | oc apply -f - apiVersion: s3.services.k8s.aws/v1alpha1 kind: Bucket metadata: name: USD{CLUSTER-NAME}-bucket namespace: ack-system spec: name: USD{CLUSTER-NAME}-bucket EOF", "aws s3 ls | grep USD{CLUSTER_NAME}-bucket", "2023-10-04 14:51:45 mrmc-test-maz-bucket", "oc -n ack-system delete bucket.s3.services.k8s.aws/USD{CLUSTER-NAME}-bucket", "oc -n ack-system delete subscription ack-USD{ACK_SERVICE}-controller aws iam detach-role-policy --role-name \"ack-USD{ACK_SERVICE}-controller\" --policy-arn USD{POLICY_ARN} aws iam delete-role --role-name \"ack-USD{ACK_SERVICE}-controller\"", "oc delete project ack-system", "export DOMAIN=<apps.example.com> 1 export AWS_PAGER=\"\" export CLUSTER=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{CLUSTER}/external-dns\" mkdir -p USD{SCRATCH}", "echo \"Cluster: USD{CLUSTER}, Region: USD{REGION}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export CLUSTER=my-custom-value", "oc -n openshift-ingress create secret tls external-dns-tls --cert=fullchain.pem --key=privkey.pem", "cat << EOF | oc apply -f - apiVersion: 
operator.openshift.io/v1 kind: IngressController metadata: name: external-dns-ingress namespace: openshift-ingress-operator spec: domain: USD{DOMAIN} defaultCertificate: name: external-dns-tls endpointPublishingStrategy: loadBalancer: dnsManagementPolicy: Unmanaged providerParameters: aws: type: NLB type: AWS scope: External type: LoadBalancerService EOF", "oc -n openshift-ingress get service/router-external-dns-ingress", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-dns-ingress LoadBalancer 172.30.71.250 a4838bb991c6748439134ab89f132a43-aeae124077b50c01.elb.us-east-1.amazonaws.com 80:32227/TCP,443:30310/TCP 43s", "export ZONE_ID=USD(aws route53 list-hosted-zones-by-name --output json --dns-name \"USD{DOMAIN}.\" --query 'HostedZones[0]'.Id --out text | sed 's/\\/hostedzone\\///')", "NLB_HOST=USD(oc -n openshift-ingress get service/router-external-dns-ingress -ojsonpath=\"{.status.loadBalancer.ingress[0].hostname}\") cat << EOF > \"USD{SCRATCH}/create-cname.json\" { \"Comment\":\"Add CNAME to ingress controller canonical domain\", \"Changes\":[{ \"Action\":\"CREATE\", \"ResourceRecordSet\":{ \"Name\": \"router-external-dns-ingress.USD{DOMAIN}\", \"Type\":\"CNAME\", \"TTL\":30, \"ResourceRecords\":[{ \"Value\": \"USD{NLB_HOST}\" }] } }] } EOF", "aws route53 change-resource-record-sets --hosted-zone-id USD{ZONE_ID} --change-batch file://USD{SCRATCH}/create-cname.json", "cat << EOF > \"USD{SCRATCH}/external-dns-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"route53:ChangeResourceRecordSets\" ], \"Resource\": [ \"arn:aws:route53:::hostedzone/USD{ZONE_ID}\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"route53:ListHostedZones\", \"route53:ListResourceRecordSets\" ], \"Resource\": [ \"*\" ] } ] } EOF", "aws iam create-user --user-name \"USD{CLUSTER}-external-dns-operator\"", "aws iam attach-user-policy --user-name \"USD{CLUSTER}-external-dns-operator\" --policy-arn USDPOLICY_ARN", "SECRET_ACCESS_KEY=USD(aws iam create-access-key --user-name \"USD{CLUSTER}-external-dns-operator\")", "cat << EOF > \"USD{SCRATCH}/credentials\" [default] aws_access_key_id = USD(echo USDSECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId') aws_secret_access_key = USD(echo USDSECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey') EOF", "oc new-project external-dns-operator", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-group namespace: external-dns-operator spec: targetNamespaces: - external-dns-operator --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1.1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc rollout status deploy external-dns-operator --timeout=300s", "oc -n external-dns-operator create secret generic external-dns --from-file \"USD{SCRATCH}/credentials\"", "cat << EOF | oc apply -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: USD{DOMAIN} spec: domains: - filterType: Include matchType: Exact name: USD{DOMAIN} provider: aws: credentials: name: external-dns type: AWS source: openshiftRouteOptions: routerName: external-dns-ingress type: OpenShiftRoute zones: - USD{ZONE_ID} EOF", "oc rollout status deploy external-dns-USD{DOMAIN} --timeout=300s", "oc new-project hello-world", "oc new-app -n hello-world 
--image=docker.io/openshift/hello-openshift", "oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls --hostname hello-openshift.USD{DOMAIN}", "aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep hello-openshift", "aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --query \"ResourceRecordSets[?Type == 'TXT']\" | grep USD{DOMAIN}", "curl https://hello-openshift.USD{DOMAIN}", "Hello OpenShift!", "export DOMAIN=apps.example.com 1 export [email protected] 2 export AWS_PAGER=\"\" export CLUSTER=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed 's|^https://||') export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{CLUSTER}/dynamic-certs\" mkdir -p USD{SCRATCH}", "echo \"Cluster: USD{CLUSTER}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export CLUSTER=my-custom-value", "export ZONE_ID=USD(aws route53 list-hosted-zones-by-name --output json --dns-name \"USD{DOMAIN}.\" --query 'HostedZones[0]'.Id --out text | sed 's/\\/hostedzone\\///')", "cat <<EOF > \"USD{SCRATCH}/cert-manager-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"route53:GetChange\", \"Resource\": \"arn:aws:route53:::change/*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"route53:ChangeResourceRecordSets\", \"route53:ListResourceRecordSets\" ], \"Resource\": \"arn:aws:route53:::hostedzone/USD{ZONE_ID}\" }, { \"Effect\": \"Allow\", \"Action\": \"route53:ListHostedZonesByName\", \"Resource\": \"*\" } ] } EOF", "POLICY_ARN=USD(aws iam create-policy --policy-name \"USD{CLUSTER}-cert-manager-policy\" --policy-document file://USD{SCRATCH}/cert-manager-policy.json --query 'Policy.Arn' --output text)", "cat <<EOF > \"USD{SCRATCH}/trust-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": \"system:serviceaccount:cert-manager:cert-manager\" } }, \"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"USD{CLUSTER}-cert-manager-operator\" --assume-role-policy-document \"file://USD{SCRATCH}/trust-policy.json\" --query Role.Arn --output text)", "aws iam attach-role-policy --role-name \"USD{CLUSTER}-cert-manager-operator\" --policy-arn USD{POLICY_ARN}", "oc new-project cert-manager-operator", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator-group namespace: cert-manager-operator spec: targetNamespaces: - cert-manager-operator --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc -n cert-manager-operator get pods", "NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-84b8799db5-gv8mx 2/2 
Running 0 12s", "oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=USD{ROLE_ARN}", "oc -n cert-manager delete pods -l app.kubernetes.io/name=cert-manager", "oc patch certmanager.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--dns01-recursive-nameservers-only\",\"--dns01-recursive-nameservers=1.1.1.1:53\"]}}}'", "cat << EOF | oc apply -f - apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-production spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: USD{EMAIL} # This key doesn't exist, cert-manager creates it privateKeySecretRef: name: prod-letsencrypt-issuer-account-key solvers: - dns01: route53: hostedZoneID: USD{ZONE_ID} region: USD{REGION} secretAccessKeySecretRef: name: '' EOF", "oc get clusterissuer.cert-manager.io/letsencrypt-production", "NAME READY AGE letsencrypt-production True 47s", "cat << EOF | oc apply -f - apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: custom-domain-ingress-cert namespace: openshift-ingress spec: secretName: custom-domain-ingress-cert-tls issuerRef: name: letsencrypt-production kind: ClusterIssuer commonName: \"USD{DOMAIN}\" dnsNames: - \"USD{DOMAIN}\" EOF", "oc -n openshift-ingress get certificate.cert-manager.io/custom-domain-ingress-cert", "NAME READY SECRET AGE custom-domain-ingress-cert True custom-domain-ingress-cert-tls 9m53s", "cat << EOF | oc apply -f - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: custom-domain-ingress namespace: openshift-ingress-operator spec: domain: USD{DOMAIN} defaultCertificate: name: custom-domain-ingress-cert-tls endpointPublishingStrategy: loadBalancer: dnsManagementPolicy: Unmanaged providerParameters: aws: type: NLB type: AWS scope: External type: LoadBalancerService EOF", "oc -n openshift-ingress get service/router-custom-domain-ingress", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-custom-domain-ingress LoadBalancer 172.30.174.34 a309962c3bd6e42c08cadb9202eca683-1f5bbb64a1f1ec65.elb.us-east-1.amazonaws.com 80:31342/TCP,443:31821/TCP 7m28s", "INGRESS=USD(oc -n openshift-ingress get service/router-custom-domain-ingress -ojsonpath=\"{.status.loadBalancer.ingress[0].hostname}\") cat << EOF > \"USD{SCRATCH}/create-cname.json\" { \"Comment\":\"Add CNAME to custom domain endpoint\", \"Changes\":[{ \"Action\":\"CREATE\", \"ResourceRecordSet\":{ \"Name\": \"*.USD{DOMAIN}\", \"Type\":\"CNAME\", \"TTL\":30, \"ResourceRecords\":[{ \"Value\": \"USD{INGRESS}\" }] } }] } EOF", "aws route53 change-resource-record-sets --hosted-zone-id USD{ZONE_ID} --change-batch file://USD{SCRATCH}/create-cname.json", "oc -n cert-manager apply -f https://github.com/cert-manager/openshift-routes/releases/latest/download/cert-manager-openshift-routes.yaml", "oc -n cert-manager get pods", "NAME READY STATUS RESTARTS AGE cert-manager-866d8f788c-9kspc 1/1 Running 0 4h21m cert-manager-cainjector-6885c585bd-znws8 1/1 Running 0 4h41m cert-manager-openshift-routes-75b6bb44cd-f8kd5 1/1 Running 0 6s cert-manager-webhook-8498785dd9-bvfdf 1/1 Running 0 4h41m", "oc new-project hello-world", "oc -n hello-world new-app --image=docker.io/openshift/hello-openshift", "oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls --hostname hello.USD{DOMAIN}", "curl -I https://hello.USD{DOMAIN}", "curl: (60) SSL: no alternative certificate subject name matches target host name 'hello.example.com' More details here: https://curl.se/docs/sslcerts.html curl 
failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.", "oc -n hello-world annotate route hello-openshift-tls cert-manager.io/issuer-kind=ClusterIssuer cert-manager.io/issuer-name=letsencrypt-production", "curl -I https://hello.USD{DOMAIN}", "HTTP/2 200 date: Thu, 05 Oct 2023 23:45:33 GMT content-length: 17 content-type: text/plain; charset=utf-8 set-cookie: 52e4465485b6fb4f8a1b1bed128d0f3b=68676068bb32d24f0f558f094ed8e4d7; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "oc get certificate,certificaterequest,order,challenge", "export ROSA_CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export ROSA_MACHINE_POOL_NAME=worker", "oc get node -o json | jq '.items[] | { \"name\": .metadata.name, \"ips\": (.status.addresses | map(select(.type == \"InternalIP\") | .address)), \"capacity\": (.metadata.annotations.\"cloud.network.openshift.io/egress-ipconfig\" | fromjson[] | .capacity.ipv4) }'", "--- { \"name\": \"ip-10-10-145-88.ec2.internal\", \"ips\": [ \"10.10.145.88\" ], \"capacity\": 14 } { \"name\": \"ip-10-10-154-175.ec2.internal\", \"ips\": [ \"10.10.154.175\" ], \"capacity\": 14 } ---", "oc new-project demo-egress-ns", "cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-ns spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.253 - 10.10.150.253 - 10.10.200.253 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-ns EOF", "oc new-project demo-egress-pod", "cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-pod spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.254 - 10.10.150.254 - 10.10.200.254 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-pod podSelector: matchLabels: run: demo-egress-pod EOF", "oc get egressips", "NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 demo-egress-pod 10.10.100.254", "rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} --cluster=\"USD{ROSA_CLUSTER_NAME}\" --labels \"k8s.ovn.org/egress-assignable=\"", "oc get egressips", "NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 ip-10-10-156-122.ec2.internal 10.10.150.253 demo-egress-pod 10.10.100.254 ip-10-10-156-122.ec2.internal 10.10.150.254", "oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: demo-service namespace: default annotations: service.beta.kubernetes.io/aws-load-balancer-scheme: \"internal\" service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" spec: selector: run: demo-service ports: - port: 80 targetPort: 8080 type: LoadBalancer externalTrafficPolicy: Local # NOTE: this limits the source IPs that are allowed to connect to our service. It # is being used as part of this demo, restricting connectivity to our egress # IP addresses only. # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. 
loadBalancerSourceRanges: - 10.10.100.254/32 - 10.10.150.254/32 - 10.10.200.254/32 - 10.10.100.253/32 - 10.10.150.253/32 - 10.10.200.253/32 EOF", "export LOAD_BALANCER_HOSTNAME=USD(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')", "oc run demo-egress-ns -it --namespace=demo-egress-ns --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash", "curl -s http://USDLOAD_BALANCER_HOSTNAME", "CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request-", "exit", "oc run demo-egress-pod -it --namespace=demo-egress-pod --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash", "curl -s http://USDLOAD_BALANCER_HOSTNAME", "CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request-", "exit", "oc run demo-egress-pod-fail -it --namespace=demo-egress-pod --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash", "curl -s http://USDLOAD_BALANCER_HOSTNAME", "exit", "oc delete svc demo-service -n default; oc delete pod demo-service -n default; oc delete project demo-egress-ns; oc delete project demo-egress-pod; oc delete egressip demo-egress-ns; oc delete egressip demo-egress-pod", "rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} --cluster=\"USD{ROSA_CLUSTER_NAME}\" --labels \"\"", "export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//')", "echo \"Cluster: USD{CLUSTER_NAME}\"", "Cluster: my-rosa-cluster", "oc get routes -n openshift-console oc get routes -n openshift-authentication", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD console console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more console https reencrypt/Redirect None downloads downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more downloads http edge/Redirect None NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD oauth-openshift oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more oauth-openshift 6443 passthrough/Redirect None", "export INGRESS_ID=USD(rosa list ingress -c USD{CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id')", "echo \"Ingress ID: USD{INGRESS_ID}\"", "Ingress ID: r3l6", "rosa edit ingress -h Edit a cluster ingress for a cluster. Usage: rosa edit ingress ID [flags] [...] --component-routes string Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. 
Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...'", "openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj \"/CN=console.my-new-domain.dev\" openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj \"/CN=downloads.my-new-domain.dev\" openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj \"/CN=oauth.my-new-domain.dev\"", "oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config", "oc get svc -n openshift-ingress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.237.88 a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com 80:31175/TCP,443:31554/TCP 76d", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=\"\";tlsSecretRef=\"\", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'", "rosa list ingress -c USD{CLUSTER_NAME} -ojson | jq \".[] | select(.id == \\\"USD{INGRESS_ID}\\\") | .component_routes\"", "{ \"console\": { \"kind\": \"ComponentRoute\", \"hostname\": \"console.my-new-domain.dev\", \"tls_secret_ref\": \"console-tls\" }, \"downloads\": { \"kind\": \"ComponentRoute\", \"hostname\": \"downloads.my-new-domain.dev\", \"tls_secret_ref\": \"downloads-tls\" }, \"oauth\": { \"kind\": \"ComponentRoute\", \"hostname\": \"oauth.my-new-domain.dev\", \"tls_secret_ref\": \"oauth-tls\" } }", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=\"\";tlsSecretRef=\"\",downloads: hostname=\"\";tlsSecretRef=\"\", oauth: hostname=\"\";tlsSecretRef=\"\"'", "rosa create account-roles --mode auto --yes", "rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes", "rosa list clusters", "rosa create account-roles --mode auto --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 
'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes", "rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto --yes", "I: Creating cluster 'my-rosa-cluster' I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'. Name: my-rosa-cluster ID: 1mlhulb3bo0l54ojd0ji000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.ibhp.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Master: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Master: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credential-oper State: waiting (Waiting for OIDC configuration) Private: No Created: Oct 28 2021 20:28:09 UTC Details Page: https://console.redhat.com/openshift/details/s/1wupmiQy45xr1nN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1mlhulb3bo0l54ojd0ji000000000000", "rosa describe cluster --cluster <cluster-name>", "rosa list clusters", "rosa create account-roles --mode manual", "I: All policy files saved to the current directory I: Run the following commands to create the account roles and policies: aws iam create-role --role-name ManagedOpenShift-Worker-Role --assume-role-policy-document file://sts_instance_worker_trust_policy.json --tags Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy --role-name ManagedOpenShift-Worker-Role --policy-name ManagedOpenShift-Worker-Role-Policy --policy-document file://sts_instance_worker_permission_policy.json", "ls openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json sts_instance_controlplane_permission_policy.json openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json sts_instance_controlplane_trust_policy.json openshift_image_registry_installer_cloud_credentials_policy.json sts_instance_worker_permission_policy.json 
openshift_ingress_operator_cloud_credentials_policy.json sts_instance_worker_trust_policy.json openshift_machine_api_aws_cloud_credentials_policy.json sts_support_permission_policy.json sts_installer_permission_policy.json sts_support_trust_policy.json sts_installer_trust_policy.json", "cat sts_installer_permission_policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", [...]", "rosa create cluster --interactive --sts", "Cluster name: my-rosa-cluster OpenShift version: <choose version> External ID (optional): <leave blank> Operator roles prefix: <accept default> Multiple availability zones: No AWS region: <choose region> PrivateLink cluster: No Install into an existing VPC: No Enable Customer Managed key: No Compute nodes instance type: m5.xlarge Enable autoscaling: No Compute nodes: 2 Machine CIDR: <accept default> Service CIDR: <accept default> Pod CIDR: <accept default> Host prefix: <accept default> Encrypt etcd data (optional): No Disable Workload monitoring: No", "I: Creating cluster 'my-rosa-cluster' I: To create this cluster again in the future, you can run: rosa create cluster --cluster-name my-rosa-cluster --role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role --operator-roles-prefix my-rosa-cluster --region us-west-2 --version 4.8.13 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. 
Name: my-rosa-cluster ID: 1t6i760dbum4mqltqh6o000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.abcd.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Control plane: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cloud-network-config-controller-cloud-cre - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credentia - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credential State: waiting (Waiting for OIDC configuration) Private: No Created: Jul 1 2022 22:13:50 UTC Details Page: https://console.redhat.com/openshift/details/s/2BMQm8xz8Hq5yEN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1t6i760dbum4mqltqh6o000000000000 I: Run the following commands to continue the cluster creation: rosa create operator-roles --cluster my-rosa-cluster rosa create oidc-provider --cluster my-rosa-cluster I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. 
I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'.", "rosa create operator-roles --mode manual --cluster <cluster-name>", "I: Run the following commands to create the operator roles: aws iam create-role --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=1mkesci269png3tck000000000000000 Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials --policy-arn arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden [...]", "rosa create oidc-provider --mode manual --cluster <cluster-name>", "I: Run the following commands to create the OIDC provider: aws iam create-open-id-connect-provider --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 --client-id-list openshift sts.amazonaws.com --thumbprint-list a9d53002e97e00e043244f3d170d000000000000 aws iam create-open-id-connect-provider --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 --client-id-list openshift sts.amazonaws.com --thumbprint-list a9d53002e97e00e043244f3d170d000000000000", "rosa describe cluster --cluster <cluster-name>", "rosa list clusters", "rosa describe cluster -c <cluster-name> | grep Console", "rosa create account-roles --mode auto --yes", "rosa create ocm-role --mode auto --admin --yes", "rosa create user-role --mode auto --yes", "rosa create account-roles --mode auto --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "rosa list ocm-role", "rosa create ocm-role --mode auto --admin --yes", "I: Creating ocm role I: Creating role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-OCM-Role-12561000' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' I: Linking OCM role I: Successfully linked role-arn 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' with organization account '1MpZfntsZeUdjWHg7XRgP000000'", "rosa 
create ocm-role --mode manual --admin --yes", "rosa create ocm-role --mode auto --yes", "rosa list user-role", "rosa create user-role --mode auto --yes", "I: Creating User role I: Creating ocm user role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-User-rosa-user-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' I: Linking User role I: Successfully linked role ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' with account '1rbOQez0z5j1YolInhcXY000000'", "rosa create account-roles --mode auto", "rosa create operator-roles --mode auto --cluster <cluster-name> --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosauser' I: Created role 'rosacluster-b736-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-ingress-operator-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created role 'rosacluster-b736-openshift-cloud-network-config-controller-cloud' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-network-config-controller-cloud' I: Created role 'rosacluster-b736-openshift-machine-api-aws-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-machine-api-aws-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' I: Created role 'rosacluster-b736-openshift-image-registry-installer-cloud-creden' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-image-registry-installer-cloud-creden'", "rosa create oidc-provider --mode auto --cluster <cluster-name> --yes", "I: Creating OIDC provider using 'arn:aws:iam::000000000000:user/rosauser' I: Created OIDC provider with ARN 'arn:aws:iam::000000000000:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/1tt4kvrr2kha2rgs8gjfvf0000000000'", "rosa list regions --hosted-cp", "#!/bin/bash set -e ########## This script will create the network requirements for a ROSA cluster. This will be a public cluster. 
This creates: - VPC - Public and private subnets - Internet Gateway - Relevant route tables - NAT Gateway # This will automatically use the region configured for the aws cli # ########## VPC_CIDR=10.0.0.0/16 PUBLIC_CIDR_SUBNET=10.0.1.0/24 PRIVATE_CIDR_SUBNET=10.0.0.0/24 Create VPC echo -n \"Creating VPC...\" VPC_ID=USD(aws ec2 create-vpc --cidr-block USDVPC_CIDR --query Vpc.VpcId --output text) Create tag name aws ec2 create-tags --resources USDVPC_ID --tags Key=Name,Value=USDCLUSTER_NAME Enable dns hostname aws ec2 modify-vpc-attribute --vpc-id USDVPC_ID --enable-dns-hostnames echo \"done.\" Create Public Subnet echo -n \"Creating public subnet...\" PUBLIC_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPUBLIC_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-public echo \"done.\" Create private subnet echo -n \"Creating private subnet...\" PRIVATE_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPRIVATE_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-private echo \"done.\" Create an internet gateway for outbound traffic and attach it to the VPC. echo -n \"Creating internet gateway...\" IGW_ID=USD(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text) echo \"done.\" aws ec2 create-tags --resources USDIGW_ID --tags Key=Name,Value=USDCLUSTER_NAME aws ec2 attach-internet-gateway --vpc-id USDVPC_ID --internet-gateway-id USDIGW_ID > /dev/null 2>&1 echo \"Attached IGW to VPC.\" Create a route table for outbound traffic and associate it to the public subnet. echo -n \"Creating route table for public subnet...\" PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=USDCLUSTER_NAME echo \"done.\" aws ec2 create-route --route-table-id USDPUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDIGW_ID > /dev/null 2>&1 echo \"Created default public route.\" aws ec2 associate-route-table --subnet-id USDPUBLIC_SUBNET_ID --route-table-id USDPUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1 echo \"Public route table associated\" Create a NAT gateway in the public subnet for outgoing traffic from the private network. echo -n \"Creating NAT Gateway...\" NAT_IP_ADDRESS=USD(aws ec2 allocate-address --domain vpc --query AllocationId --output text) NAT_GATEWAY_ID=USD(aws ec2 create-nat-gateway --subnet-id USDPUBLIC_SUBNET_ID --allocation-id USDNAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text) aws ec2 create-tags --resources USDNAT_IP_ADDRESS --resources USDNAT_GATEWAY_ID --tags Key=Name,Value=USDCLUSTER_NAME sleep 10 echo \"done.\" Create a route table for the private subnet to the NAT gateway. 
echo -n \"Creating a route table for the private subnet to the NAT gateway...\" PRIVATE_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPRIVATE_ROUTE_TABLE_ID USDNAT_IP_ADDRESS --tags Key=Name,Value=USDCLUSTER_NAME-private aws ec2 create-route --route-table-id USDPRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDNAT_GATEWAY_ID > /dev/null 2>&1 aws ec2 associate-route-table --subnet-id USDPRIVATE_SUBNET_ID --route-table-id USDPRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1 echo \"done.\" echo \"***********VARIABLE VALUES*********\" echo \"VPC_ID=\"USDVPC_ID echo \"PUBLIC_SUBNET_ID=\"USDPUBLIC_SUBNET_ID echo \"PRIVATE_SUBNET_ID=\"USDPRIVATE_SUBNET_ID echo \"PUBLIC_ROUTE_TABLE_ID=\"USDPUBLIC_ROUTE_TABLE_ID echo \"PRIVATE_ROUTE_TABLE_ID=\"USDPRIVATE_ROUTE_TABLE_ID echo \"NAT_GATEWAY_ID=\"USDNAT_GATEWAY_ID echo \"IGW_ID=\"USDIGW_ID echo \"NAT_IP_ADDRESS=\"USDNAT_IP_ADDRESS echo \"Setup complete.\" echo \"\" echo \"To make the cluster create commands easier, please run the following commands to set the environment variables:\" echo \"export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID\" echo \"export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID\"", "export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID", "echo \"Public Subnet: USDPUBLIC_SUBNET_ID\"; echo \"Private Subnet: USDPRIVATE_SUBNET_ID\"", "Public Subnet: subnet-0faeeeb0000000000 Private Subnet: subnet-011fe340000000000", "export OIDC_ID=USD(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')", "export CLUSTER_NAME=<cluster_name> export REGION=<VPC_region>", "rosa create account-roles --mode auto --yes", "rosa create cluster --cluster-name USDCLUSTER_NAME --subnet-ids USD{PUBLIC_SUBNET_ID},USD{PRIVATE_SUBNET_ID} --hosted-cp --region USDREGION --oidc-config-id USDOIDC_ID --sts --mode auto --yes", "rosa describe cluster --cluster USDCLUSTER_NAME", "rosa list clusters", "rosa logs install --cluster USDCLUSTER_NAME --watch", "rosa create admin --cluster=<cluster-name>", "W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster 'my-rosa-cluster'. It may take up to a minute for the account to become active. I: To login, run the following command: login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 --username cluster-admin --password FWGYL-2mkJI-00000-00000", "oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 > --username cluster-admin > --password FWGYL-2mkJI-00000-00000", "Login successful. You have access to 79 projects, the list has been suppressed. 
You can list all projects with ' projects' Using project \"default\".", "oc whoami", "cluster-admin", "get all -n openshift-apiserver", "rosa create idp --help", "rosa create idp --cluster=<cluster name> --interactive", "Type of identity provider: github Identity Provider Name: <IDP-name> Restrict to members of: organizations GitHub organizations: <organization-account-name>", "rosa grant user cluster-admin --user <idp_user_name> --cluster=<cluster-name>", "rosa grant user dedicated-admin --user <idp_user_name> --cluster=<cluster-name>", "rosa list users --cluster=<cluster-name>", "rosa list users --cluster=my-rosa-cluster ID GROUPS <idp_user_name> cluster-admins", "oc get all -n openshift-apiserver", "oc login --token=sha256~GBAfS4JQ0t1UTKYHbWAK6OUWGUkdMGz000000000000 --server=https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443", "Logged into \"https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443\" as \"rosa-user\" using the token provided. You have access to 79 projects, the list has been suppressed. You can list all projects with ' projects' Using project \"default\".", "oc whoami", "rosa-user", "rosa describe cluster -c <cluster-name> | grep Console", "rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes>", "rosa create machinepool --cluster=my-rosa-cluster --name=new-mp --replicas=2", "I: Machine pool 'new-mp' created successfully on cluster 'my-rosa-cluster' I: To view all machine pools, run 'rosa list machinepools -c my-rosa-cluster'", "rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> --labels=`<key=pair>`", "rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-mp --replicas=2 --labels='app=db','tier=backend'", "I: Machine pool 'db-nodes-mp' created successfully on cluster 'my-rosa-cluster'", "rosa list machinepools --cluster=<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa list machinepools --cluster=<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> <machinepool-name>", "rosa edit machinepool --cluster=my-rosa-cluster --replicas 3 Default", "rosa describe cluster --cluster=<cluster-name> | grep Compute", "rosa describe cluster --cluster=my-rosa-cluster | grep Compute", "- Compute: 3 (m5.xlarge)", "rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> --labels='key=value' <machinepool-name>", "rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one' new-mp", "rosa create machinepool --cluster=<cluster-name> --name=<mp-name> --replicas=<number-nodes> --labels='<key=pair>' --instance-type=<type>", "rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-large-mp --replicas=2 --labels='app=db','tier=backend' --instance-type=m5.2xlarge", "rosa list instance-types", "rosa create machinepool -c <cluster-name> --interactive", "rosa list machinepools -c <cluster-name>", "rosa list machinepools -c <cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num>", "rosa edit machinepool -c my-rosa-cluster --enable-autoscaling Default --min-replicas=2 --max-replicas=4", "rosa list machinepools -c 
<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default Yes 2-4 m5.xlarge us-east-1a", "rosa upgrade cluster --help", "rosa list upgrade -c <cluster-name>", "rosa list upgrade -c <cluster-name> VERSION NOTES 4.14.7 recommended 4.14.6", "rosa upgrade cluster -c <cluster-name> --version <desired-version>", "rosa upgrade cluster -c <cluster-name> --version <desired-version> --schedule-date <future-date-for-update> --schedule-time <future-time-for-update>", "rosa list clusters", "rosa delete cluster --cluster <cluster-name>", "rosa list clusters", "rosa delete oidc-provider -c <clusterID> --mode auto --yes", "rosa delete operator-roles -c <clusterID> --mode auto --yes", "rosa delete account-roles --prefix <prefix> --mode auto --yes", "rosa create ocm-role", "rosa create user-role", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ostoy-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: ostoy-frontend labels: app: ostoy spec: selector: matchLabels: app: ostoy-frontend strategy: type: Recreate replicas: 1 template: metadata: labels: app: ostoy-frontend spec: # Uncomment to use with ACK portion of the workshop # If you chose a different service account name please replace it. # serviceAccount: ostoy-sa containers: - name: ostoy-frontend securityContext: allowPrivilegeEscalation: false runAsNonRoot: true seccompProfile: type: RuntimeDefault capabilities: drop: - ALL image: quay.io/ostoylab/ostoy-frontend:1.6.0 imagePullPolicy: IfNotPresent ports: - name: ostoy-port containerPort: 8080 resources: requests: memory: \"256Mi\" cpu: \"100m\" limits: memory: \"512Mi\" cpu: \"200m\" volumeMounts: - name: configvol mountPath: /var/config - name: secretvol mountPath: /var/secret - name: datavol mountPath: /var/demo_files livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 10 periodSeconds: 5 env: - name: ENV_TOY_SECRET valueFrom: secretKeyRef: name: ostoy-secret-env key: ENV_TOY_SECRET - name: MICROSERVICE_NAME value: OSTOY_MICROSERVICE_SVC - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumes: - name: configvol configMap: name: ostoy-configmap-files - name: secretvol secret: defaultMode: 420 secretName: ostoy-secret - name: datavol persistentVolumeClaim: claimName: ostoy-pvc --- apiVersion: v1 kind: Service metadata: name: ostoy-frontend-svc labels: app: ostoy-frontend spec: type: ClusterIP ports: - port: 8080 targetPort: ostoy-port protocol: TCP name: ostoy selector: app: ostoy-frontend --- apiVersion: route.openshift.io/v1 kind: Route metadata: name: ostoy-route spec: to: kind: Service name: ostoy-frontend-svc --- apiVersion: v1 kind: Secret metadata: name: ostoy-secret-env type: Opaque data: ENV_TOY_SECRET: VGhpcyBpcyBhIHRlc3Q= --- kind: ConfigMap apiVersion: v1 metadata: name: ostoy-configmap-files data: config.json: '{ \"default\": \"123\" }' --- apiVersion: v1 kind: Secret metadata: name: ostoy-secret data: secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1 type: Opaque", "apiVersion: apps/v1 kind: Deployment metadata: name: ostoy-microservice labels: app: ostoy spec: selector: matchLabels: app: ostoy-microservice replicas: 1 template: metadata: labels: app: ostoy-microservice spec: containers: - name: ostoy-microservice securityContext: allowPrivilegeEscalation: false runAsNonRoot: true seccompProfile: type: RuntimeDefault capabilities: drop: - ALL image: 
quay.io/ostoylab/ostoy-microservice:1.5.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 protocol: TCP resources: requests: memory: \"128Mi\" cpu: \"50m\" limits: memory: \"256Mi\" cpu: \"100m\" --- apiVersion: v1 kind: Service metadata: name: ostoy-microservice-svc labels: app: ostoy-microservice spec: type: ClusterIP ports: - port: 8080 targetPort: 8080 protocol: TCP selector: app: ostoy-microservice", "apiVersion: s3.services.k8s.aws/v1alpha1 kind: Bucket metadata: name: ostoy-bucket namespace: ostoy spec: name: ostoy-bucket", "oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443 Logged into \"https://api.myrosacluster.abcd.p1.openshiftapps.com:6443\" as \"rosa-user\" using the token provided. You don't have any projects. You can try to create a new project, by running new-project <project name>", "oc new-project ostoy", "Now using project \"ostoy\" on server \"https://api.myrosacluster.abcd.p1.openshiftapps.com:6443\".", "oc new-project ostoy-USD(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')", "oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml", "oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml deployment.apps/ostoy-microservice created service/ostoy-microservice-svc created", "oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml", "persistentvolumeclaim/ostoy-pvc created deployment.apps/ostoy-frontend created service/ostoy-frontend-svc created route.route.openshift.io/ostoy-route created configmap/ostoy-configmap-env created secret/ostoy-secret-env created configmap/ostoy-configmap-files created secret/ostoy-secret created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD ostoy-route ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com ostoy-frontend-svc <all> None", "oc get pods", "oc rsh <pod_name>", "cd /var/demo_files", "ls", "cat test-pv.txt", "oc get pods NAME READY STATUS RESTARTS AGE ostoy-frontend-5fc8d486dc-wsw24 1/1 Running 0 18m ostoy-microservice-6cf764974f-hx4qm 1/1 Running 0 18m oc rsh ostoy-frontend-5fc8d486dc-wsw24 cd /var/demo_files/ ls lost+found test-pv.txt cat test-pv.txt OpenShift is the greatest thing since sliced bread!", "kind: ConfigMap apiVersion: v1 metadata: name: ostoy-configmap-files data: config.json: '{ \"default\": \"123\" }'", "USERNAME=my_user PASSWORD=VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1 SMTP=localhost SMTP_PORT=25", "{ \"npm_config_local_prefix\": \"/opt/app-root/src\", \"STI_SCRIPTS_PATH\": \"/usr/libexec/s2i\", \"npm_package_version\": \"1.7.0\", \"APP_ROOT\": \"/opt/app-root\", \"NPM_CONFIG_PREFIX\": \"/opt/app-root/src/.npm-global\", \"OSTOY_MICROSERVICE_PORT_8080_TCP_PORT\": \"8080\", \"NODE\": \"/usr/bin/node\", \"LD_PRELOAD\": \"libnss_wrapper.so\", \"KUBERNETES_SERVICE_HOST\": \"172.30.0.1\", \"OSTOY_MICROSERVICE_PORT\": \"tcp://172.30.60.255:8080\", \"OSTOY_PORT\": \"tcp://172.30.152.25:8080\", \"npm_package_name\": \"ostoy\", \"OSTOY_SERVICE_PORT_8080_TCP\": \"8080\", \"_\": \"/usr/bin/node\" \"ENV_TOY_CONFIGMAP\": \"ostoy-configmap -env\" }", "oc get service <name_of_service> -o yaml", "apiVersion: v1 kind: Service metadata: name: ostoy-microservice-svc labels: app: ostoy-microservice spec: type: ClusterIP ports: - 
port: 8080 targetPort: 8080 protocol: TCP selector: app: ostoy-microservice", "oc get pods", "NAME READY STATUS RESTARTS AGE ostoy-frontend-679cb85695-5cn7x 1/1 Running 0 1h ostoy-microservice-86b4c6f559-p594d 1/1 Running 0 1h", "spec: selector: matchLabels: app: ostoy-microservice replicas: 3", "oc apply -f ostoy-microservice-deployment.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE ostoy-frontend-5fbcc7d9-rzlgz 1/1 Running 0 26m ostoy-microservice-6666dcf455-2lcv4 1/1 Running 0 81s ostoy-microservice-6666dcf455-5z56w 1/1 Running 0 81s ostoy-microservice-6666dcf455-tqzmn 1/1 Running 0 26m", "oc scale deployment ostoy-microservice --replicas=2", "oc get pods", "NAME READY STATUS RESTARTS AGE ostoy-frontend-5fbcc7d9-rzlgz 1/1 Running 0 75m ostoy-microservice-6666dcf455-2lcv4 1/1 Running 0 50m ostoy-microservice-6666dcf455-tqzmn 1/1 Running 0 75m", "oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10", "get pods --field-selector=status.phase=Running | grep microservice", "ostoy-microservice-79894f6945-cdmbd 1/1 Running 0 3m14s ostoy-microservice-79894f6945-mgwk7 1/1 Running 0 4h24m ostoy-microservice-79894f6945-q925d 1/1 Running 0 3m14s", "oc new-project autoscale-ex", "oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE work-queue-5x2nq-24xxn 0/1 Pending 0 10s work-queue-5x2nq-57zpt 0/1 Pending 0 10s work-queue-5x2nq-58bvs 0/1 Pending 0 10s work-queue-5x2nq-6c5tl 1/1 Running 0 10s work-queue-5x2nq-7b84p 0/1 Pending 0 10s work-queue-5x2nq-7hktm 0/1 Pending 0 10s work-queue-5x2nq-7md52 0/1 Pending 0 10s work-queue-5x2nq-7qgmp 0/1 Pending 0 10s work-queue-5x2nq-8279r 0/1 Pending 0 10s work-queue-5x2nq-8rkj2 0/1 Pending 0 10s work-queue-5x2nq-96cdl 0/1 Pending 0 10s work-queue-5x2nq-96tfr 0/1 Pending 0 10s", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-138-106.us-west-2.compute.internal Ready infra,worker 22h v1.23.5+3afdacb ip-10-0-153-68.us-west-2.compute.internal Ready worker 2m12s v1.23.5+3afdacb ip-10-0-165-183.us-west-2.compute.internal Ready worker 2m8s v1.23.5+3afdacb ip-10-0-176-123.us-west-2.compute.internal Ready infra,worker 22h v1.23.5+3afdacb ip-10-0-195-210.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb ip-10-0-196-84.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb ip-10-0-203-104.us-west-2.compute.internal Ready worker 2m6s v1.23.5+3afdacb ip-10-0-217-202.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb ip-10-0-225-141.us-west-2.compute.internal Ready worker 23h v1.23.5+3afdacb ip-10-0-231-245.us-west-2.compute.internal Ready worker 2m11s v1.23.5+3afdacb ip-10-0-245-27.us-west-2.compute.internal Ready worker 2m8s v1.23.5+3afdacb ip-10-0-245-7.us-west-2.compute.internal Ready worker 23h v1.23.5+3afdacb", "oc project ostoy", "curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/configure-cloudwatch.sh | bash", "Varaibles are set...ok. Policy already exists...ok. Created RosaCloudWatch-mycluster role. Attached role policy. Deploying the Red Hat OpenShift Logging Operator namespace/openshift-logging configured operatorgroup.operators.coreos.com/cluster-logging created subscription.operators.coreos.com/cluster-logging created Waiting for Red Hat OpenShift Logging Operator deployment to complete Red Hat OpenShift Logging Operator deployed. 
secret/cloudwatch-credentials created clusterlogforwarder.logging.openshift.io/instance created clusterlogging.logging.openshift.io/instance created Complete.", "aws logs describe-log-groups --log-group-name-prefix rosa-mycluster", "{ \"logGroups\": [ { \"logGroupName\": \"rosa-mycluster.application\", \"creationTime\": 1724104537717, \"metricFilterCount\": 0, \"arn\": \"arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application:*\", \"storedBytes\": 0, \"logGroupClass\": \"STANDARD\", \"logGroupArn\": \"arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application\" }, { \"logGroupName\": \"rosa-mycluster.audit\", \"creationTime\": 1724104152968, \"metricFilterCount\": 0, \"arn\": \"arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit:*\", \"storedBytes\": 0, \"logGroupClass\": \"STANDARD\", \"logGroupArn\": \"arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit\" },", "oc get pods -o name", "pod/ostoy-frontend-679cb85695-5cn7x 1 pod/ostoy-microservice-86b4c6f559-p594d", "oc logs <pod-name>", "oc logs ostoy-frontend-679cb85695-5cn7x [...] ostoy-frontend-679cb85695-5cn7x: server starting on port 8080 Redirecting to /home stdout: All is well! stderr: Oh no! Error!", "oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443", "Logged into \"https://api.myrosacluster.abcd.p1.openshiftapps.com:6443\" as \"rosa-user\" using the token provided. You don't have any projects. You can try to create a new project, by running new-project <project name>", "oc new-project ostoy-s2i", "oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml", "oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml", "oc new-app https://github.com/<UserName>/ostoy --context-dir=microservice --name=ostoy-microservice --labels=app=ostoy", "--> Creating resources with label app=ostoy imagestream.image.openshift.io \"ostoy-microservice\" created buildconfig.build.openshift.io \"ostoy-microservice\" created deployment.apps \"ostoy-microservice\" created service \"ostoy-microservice\" created --> Success Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/ostoy-microservice' Run 'oc status' to view your app.", "oc status", "In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443 svc/ostoy-microservice - 172.30.47.74:8080 dc/ostoy-microservice deploys istag/ostoy-microservice:latest <- bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8 deployment #1 deployed 34 seconds ago - 1 pod", "oc new-app https://github.com/<UserName>/ostoy --env=MICROSERVICE_NAME=OSTOY_MICROSERVICE", "--> Creating resources imagestream.image.openshift.io \"ostoy\" created buildconfig.build.openshift.io \"ostoy\" created deployment.apps \"ostoy\" created service \"ostoy\" created --> Success Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress. Application is not exposed. 
You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/ostoy' Run 'oc status' to view your app.", "oc patch deployment ostoy --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/strategy/type\", \"value\": \"Recreate\"}, {\"op\": \"remove\", \"path\": \"/spec/strategy/rollingUpdate\"}]'", "oc set probe deployment ostoy --liveness --get-url=http://:8080/health", "oc set volume deployment ostoy --add --secret-name=ostoy-secret --mount-path=/var/secret", "oc set volume deployment ostoy --add --configmap-name=ostoy-config -m /var/config", "oc set volume deployment ostoy --add --type=pvc --claim-size=1G -m /var/demo_files", "oc create route edge --service=ostoy --insecure-policy=Redirect", "python -m webbrowser \"USD(oc get route ostoy -o template --template='https://{{.spec.host}}')\"", "oc get route", "oc get bc/ostoy-microservice -o=jsonpath='{.spec.triggers..github.secret}'", "`o_3x9M1qoI2Wj_cz1WiK`", "oc describe bc/ostoy-microservice", "[...] Webhook GitHub: URL: https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy/webhooks/<secret>/github [...]", "https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/o_3x9M1qoI2Wj_czR1WiK/github", "7 app.get('/', function(request, response) { 8 //let randomColor = getRandomColor(); // <-- comment this 9 let randomColor = getRandomGrayScaleColor(); // <-- uncomment this 10 11 response.writeHead(200, {'Content-Type': 'application/json'});", "curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/setup-s3-ack-controller.sh | bash", "oc describe pod ack-s3-controller -n ack-system | grep \"^\\s*AWS_\"", "AWS_ROLE_ARN: arn:aws:iam::000000000000:role/ack-s3-controller AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token", "oc rollout restart deployment ack-s3-controller -n ack-system", "oc new-project ostoy-USD(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')", "export OSTOY_NAMESPACE=USD(oc config view --minify -o 'jsonpath={..namespace}')", "export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text)", "export OIDC_PROVIDER=USD(rosa describe cluster -c <cluster-name> -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4)", "cat <<EOF > ./ostoy-sa-trust.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{OSTOY_NAMESPACE}:ostoy-sa\" } } } ] } EOF", "aws iam create-role --role-name \"ostoy-sa-role\" --assume-role-policy-document file://ostoy-sa-trust.json", "export POLICY_ARN=USD(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3FullAccess`].Arn' --output text)", "aws iam attach-role-policy --role-name \"ostoy-sa-role\" --policy-arn \"USD{POLICY_ARN}\"", "export APP_IAM_ROLE_ARN=USD(aws iam get-role --role-name=ostoy-sa-role --query Role.Arn --output text)", "cat <<EOF | oc apply -f - apiVersion: v1 kind: ServiceAccount metadata: name: ostoy-sa namespace: USD{OSTOY_NAMESPACE} annotations: eks.amazonaws.com/role-arn: \"USDAPP_IAM_ROLE_ARN\" EOF", "oc adm policy add-scc-to-user restricted system:serviceaccount:USD{OSTOY_NAMESPACE}:ostoy-sa", "oc describe serviceaccount ostoy-sa 
-n USD{OSTOY_NAMESPACE}", "Name: ostoy-sa Namespace: ostoy Labels: <none> Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::000000000000:role/ostoy-sa-role Image pull secrets: ostoy-sa-dockercfg-b2l94 Mountable secrets: ostoy-sa-dockercfg-b2l94 Tokens: ostoy-sa-token-jlc6d Events: <none>", "cat <<EOF | oc apply -f - apiVersion: s3.services.k8s.aws/v1alpha1 kind: Bucket metadata: name: USD{OSTOY_NAMESPACE}-bucket namespace: USD{OSTOY_NAMESPACE} spec: name: USD{OSTOY_NAMESPACE}-bucket EOF", "aws s3 ls | grep USD{OSTOY_NAMESPACE}-bucket", "- oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml", "- oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml", "oc patch deploy ostoy-frontend -n USD{OSTOY_NAMESPACE} --type=merge --patch '{\"spec\": {\"template\": {\"spec\":{\"serviceAccount\":\"ostoy-sa\"}}}}'", "spec: # Uncomment to use with ACK portion of the workshop # If you chose a different service account name please replace it. serviceAccount: ostoy-sa containers: - name: ostoy-frontend image: quay.io/ostoylab/ostoy-frontend:1.6.0 imagePullPolicy: IfNotPresent [...]", "oc describe pod ostoy-frontend -n USD{OSTOY_NAMESPACE} | grep \"^\\s*AWS_\"", "AWS_ROLE_ARN: arn:aws:iam::000000000000:role/ostoy-sa AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token", "oc get route ostoy-route -n USD{OSTOY_NAMESPACE} -o jsonpath='{.spec.host}{\"\\n\"}'", "aws s3 ls s3://USD{OSTOY_NAMESPACE}-bucket", "aws s3 ls s3://ostoy-bucket 2023-05-04 22:20:51 51 OSToy.txt" ]
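A cleanup sketch, not part of the original workshop commands: it assumes the OSTOY_NAMESPACE and POLICY_ARN variables from the steps above are still set, that the role name ostoy-sa-role was not changed, and that the aws and oc sessions are still authenticated.

# Hypothetical cleanup after the ACK/OSToy exercise; verify the names before running.
oc delete bucket ${OSTOY_NAMESPACE}-bucket -n ${OSTOY_NAMESPACE}   # the ACK controller removes the S3 bucket
oc delete project ${OSTOY_NAMESPACE}                               # removes the OSToy workloads and service account
aws iam detach-role-policy --role-name "ostoy-sa-role" --policy-arn "${POLICY_ARN}"
aws iam delete-role --role-name "ostoy-sa-role"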
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/tutorials/index
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.js example. $ cd <install-dir> /node_modules/rhea/examples $ node helloworld.js Hello World! 3.3. Running Hello World on Microsoft Windows The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.js example. > cd <install-dir> /node_modules/rhea/examples > node helloworld.js Hello World!
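Before running the example, a quick pre-flight check can confirm that the client module and the broker are in place. This is a minimal sketch, not taken from the product documentation; it assumes a bash shell on Red Hat Enterprise Linux, the nc utility, and a broker on localhost:5672 as described in the prerequisites.

$ cd <install-dir>
$ npm ls rhea                                         # confirm the rhea client module is installed
$ nc -z localhost 5672 && echo "broker is listening"  # confirm the broker port is reachable
$ node node_modules/rhea/examples/helloworld.js
Hello World!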
[ "cd <install-dir> /node_modules/rhea/examples node helloworld.js Hello World!", "> cd <install-dir> /node_modules/rhea/examples > node helloworld.js Hello World!" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/getting_started
Chapter 1. Overview of device mapper multipathing
Chapter 1. Overview of device mapper multipathing DM Multipath provides: Redundancy DM Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only a subset of the paths is used at any time for I/O. If any element of an I/O path such as the cable, switch, or controller fails, DM Multipath switches to an alternate path. Note The number of paths depends on the setup. DM Multipath setups usually have 2, 4, or 8 paths to the storage, but other numbers of paths are also possible. Improved Performance DM Multipath can be configured in an active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM Multipath can detect loading on the I/O paths and dynamically rebalance the load. 1.1. Active/Passive multipath configuration with one RAID device In this configuration, there are two Host Bus Adapters (HBAs) on the server, two SAN switches, and two RAID controllers. The following are possible points of failure in this configuration: HBA failure Fibre Channel cable failure SAN switch failure Array controller port failure With DM Multipath configured, a failure at any of these points causes DM Multipath to switch to the alternate I/O path. The following image describes the configuration with two I/O paths from the server to a RAID device. Here, there is one I/O path that goes through hba1 , SAN1 , and cntrlr1 and a second I/O path that goes through hba2 , SAN2 , and cntrlr2 . Figure 1.1. Active/Passive multipath configuration with one RAID device 1.2. Active/Passive multipath configuration with two RAID devices In this configuration, there are two HBAs on the server, two SAN switches, and two RAID devices with two RAID controllers each. With DM Multipath configured, a failure at any of the points of the I/O path to either of the RAID devices causes DM Multipath to switch to the alternate I/O path for that device. The following image describes the configuration with two I/O paths to each RAID device. Figure 1.2. Active/Passive multipath configuration with two RAID devices 1.3. Active/Active multipath configuration with one RAID device In this configuration, there are two HBAs on the server, two SAN switches, and two RAID controllers. The following image describes the configuration with two I/O paths from the server to a storage device. Here, I/O can be spread among these two paths. Figure 1.3. Active/Active multipath configuration with one RAID device 1.4. DM Multipath components The following table describes the DM Multipath components. Table 1.1. Components of DM Multipath Component Description dm_multipath kernel module Reroutes I/O and supports failover for paths and path groups. mpathconf utility Configures and enables device mapper multipathing. multipath command Lists and configures the multipath devices. It is also executed by udev whenever a block device is added, to determine if the device should be part of a multipath device or not. multipathd daemon Automatically creates and removes multipath devices and monitors paths; as paths fail and come back, it may update the multipath device. Allows interactive changes to multipath devices. Reload the service if there are any changes to the /etc/multipath.conf file. kpartx command Creates device mapper devices for the partitions on a device. This command is automatically executed by udev when multipath devices are created to create partition devices on top of them.
The kpartx command is provided in its own package, but the device-mapper-multipath package depends on it. mpathpersist Sets up SCSI-3 persistent reservations on multipath devices. This command works similarly to the way sg_persist works for SCSI devices that are not multipathed, but it handles setting persistent reservations on all paths of a multipath device. It coordinates with multipathd to ensure that the reservations are set up correctly on paths that are added later. To use this functionality, the reservation_key attribute must be defined in the /etc/multipath.conf file. Otherwise the multipathd daemon will not check for persistent reservations for newly discovered paths or reinstated paths. 1.5. The multipath command The multipath command is used to detect and combine multiple paths to devices. It provides a variety of options you can use to administer your multipathed devices. The following table describes some options of the multipath command that you may find useful. Table 1.2. Useful multipath command options Option Description -l Display the current multipath topology gathered from sysfs and the device mapper. -ll Display the current multipath topology gathered from sysfs , the device mapper, and all other available components on the system. -f device Remove the named multipath device. -F Remove all unused multipath devices. -w device Remove the wwid of the specified device from the wwids file. -W Reset the wwids file to include only the current multipath devices. -r Force reload of the multipath device. 1.6. Displaying multipath topology To effectively monitor paths, troubleshoot multipath issues, or check whether the multipath configurations are set correctly, you can display the multipath topology. Procedure Display the multipath device topology: The output can be split into three parts. Each part displays information for the following group: Multipath device information: mpatha (3600d0230000000000e13954ed5f89300) : alias (wwid if it's different from the alias) dm-4 : dm device name WINSYS,SF2372 : vendor, product size=233G : size features='1 queue_if_no_path' : features hwhandler='0' : hardware handler wp=rw : write permissions Path group information: policy='service-time 0' : scheduling policy prio=1 : path group priority status=active : path group status Path information: 6:0:0:0 : host:channel:id:lun sdf : devnode 8:80 : major:minor numbers active : dm status ready : path status running : online status For more information about the dm, path and online status, see Path status . Other multipath commands, which are used to list, create, or reload multipath devices, also display the device topology. However, some information might be unknown and shown as undef in the output. This is normal behavior. Use the multipath -ll command to view the correct state. Note In certain cases, such as creating a multipath device, the multipath topology displays a parameter, which represents if any action was taken. For example, the following command output shows the create: parameter to represent that a multipath device was created: 1.7. Path status The path status is updated periodically by the multipathd daemon based on the polling interval defined in the /etc/multipath.conf file. In terms of the kernel, the dm status is similar to the path status. The dm state will retain its current status until the path checker has completed. Path status ready, ghost The path is up and ready for I/O. faulty, shaky The path is down. 
i/o pending The checker is actively checking this path, and the state will be updated shortly. i/o timeout The checker did not return success / failure before the timeout period. This is treated the same as faulty . removed The path has been removed from the system, and will shortly be removed from the multipath device. This is treated the same as faulty . wild multipathd was unable to run the path checker because of an internal error or configuration issue. This is treated the same as faulty , except multipath will skip many actions on the path. unchecked The path checker has not run on this path, either because it has just been discovered, it does not have an assigned path checker, or the path checker encountered an error. This is treated the same as wild . delayed The path checker returns that the path is up, but multipath is delaying the reinstatement of the path because the path has recently failed multiple times and multipath has been configured to delay paths in this case. This is treated the same as faulty . Dm status Active Maps to the ready and ghost path statuses. Failed Maps to all other path statuses, except i/o pending , which does not have an equivalent dm state. Online status Running The device is enabled. Offline The device has been disabled. 1.8. Additional resources multipath(8) and multipathd(8) man pages /etc/multipath.conf file
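To show how the components described above fit together in practice, the following is a minimal sketch of a first-time setup and a configuration reload. It assumes the device-mapper-multipath package is installed and root access; it is not a substitute for the full configuration procedure.

# Enable multipathing with a default /etc/multipath.conf and start multipathd
mpathconf --enable --with_multipathd y

# After editing /etc/multipath.conf (for example, to set reservation_key), reload the daemon
systemctl reload multipathd

# Verify the resulting topology and path status
multipath -ll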
[ "multipath -ll mpatha (3600d0230000000000e13954ed5f89300) dm-4 WINSYS,SF2372 size=233G features='1 queue_if_no_path' hwhandler='0' wp=rw `-+- policy='service-time 0' prio=1 status=active `- 6:0:0:0 sdf 8:80 active ready running", "create: mpatha (3600d0230000000000e13954ed5f89300) undef WINSYS,SF2372 size=233G features='1 queue_if_no_path' hwhandler='0' wp=undef `-+- policy='service-time 0' prio=1 status=undef `- 6:0:0:0 sdf 8:80 undef ready running" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_device_mapper_multipath/overview-of-device-mapper-multipathing_configuring-device-mapper-multipath
Chapter 23. KIE Server and KIE container commands in Red Hat Decision Manager
Chapter 23. KIE Server and KIE container commands in Red Hat Decision Manager Red Hat Decision Manager supports server commands that you can send to KIE Server for server-related or container-related operations, such as retrieving server information or creating or deleting a container. The full list of supported KIE Server configuration commands is located in the org.kie.server.api.commands package in your Red Hat Decision Manager instance. In the KIE Server REST API, you use the org.kie.server.api.commands commands as the request body for POST requests to http://SERVER:PORT/kie-server/services/rest/server/config . For more information about using the KIE Server REST API, see Chapter 21, KIE Server REST API for KIE containers and business assets . In the KIE Server Java client API, you use the corresponding method in the parent KieServicesClient Java client as an embedded API request in your Java application. All KIE Server commands are executed by methods provided in the Java client API, so you do not need to embed the actual KIE Server commands in your Java application. For more information about using the KIE Server Java client API, see Chapter 22, KIE Server Java client API for KIE containers and business assets . 23.1. Sample KIE Server and KIE container commands The following are sample KIE Server commands that you can use with the KIE Server REST API or Java client API for server-related or container-related operations in KIE Server: GetServerInfoCommand GetServerStateCommand CreateContainerCommand GetContainerInfoCommand ListContainersCommand CallContainerCommand DisposeContainerCommand GetScannerInfoCommand UpdateScannerCommand UpdateReleaseIdCommand For the full list of supported KIE Server configuration and management commands, see the org.kie.server.api.commands package in your Red Hat Decision Manager instance. You can run KIE Server commands individually or together as a batch REST API request or batch Java API request: Batch REST API request to create, call, and dispose a KIE container (JSON) { "commands": [ { "create-container": { "container": { "status": "STARTED", "container-id": "command-script-container", "release-id": { "version": "1.0", "group-id": "com.redhat", "artifact-id": "Project1" } } } }, { "call-container": { "payload": "{\n \"commands\" : [ {\n \"fire-all-rules\" : {\n \"max\" : -1,\n \"out-identifier\" : null\n }\n } ]\n}", "container-id": "command-script-container" } }, { "dispose-container": { "container-id": "command-script-container" } } ] } Batch Java API request to retrieve, dispose, and re-create a KIE container public void disposeAndCreateContainer() { System.out.println("== Disposing and creating containers =="); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println("No containers available..."); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println("Error disposing " + containerId + ". 
Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Success Disposing container " + containerId); System.out.println("Trying to recreate the container..."); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println("Error creating " + containerId + ". Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Container recreated with success!"); } Each command in this section includes a REST request body example (JSON) for the KIE Server REST API and an embedded method example from the KieServicesClient Java client for the KIE Server Java client API. GetServerInfoCommand Returns information about this KIE Server instance. Example REST request body (JSON) { "commands" : [ { "get-server-info" : { } } ] } Example Java client method KieServerInfo serverInfo = kieServicesClient.getServerInfo(); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Kie Server info", "result": { "kie-server-info": { "id": "default-kieserver", "version": "7.11.0.Final-redhat-00001", "name": "default-kieserver", "location": "http://localhost:8080/kie-server/services/rest/server", "capabilities": [ "KieServer", "BRM", "BPM", "CaseMgmt", "BPM-UI", "BRP", "DMN", "Swagger" ], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1538502533321 }, "content": [ "Server KieServerInfo{serverId='default-kieserver', version='7.11.0.Final-redhat-00001', name='default-kieserver', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger], messages=null}started successfully at Tue Oct 02 13:48:53 EDT 2018" ] } ] } } } ] } GetServerStateCommand Returns information about the current state and configurations of this KIE Server instance. 
Example REST request body (JSON) { "commands" : [ { "get-server-state" : { } } ] } Example Java client method KieServerStateInfo serverStateInfo = kieServicesClient.getServerState(); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Successfully loaded server state for server id default-kieserver", "result": { "kie-server-state-info": { "controller": [ "http://localhost:8080/business-central/rest/controller" ], "config": { "config-items": [ { "itemName": "org.kie.server.location", "itemValue": "http://localhost:8080/kie-server/services/rest/server", "itemType": "java.lang.String" }, { "itemName": "org.kie.server.controller.user", "itemValue": "controllerUser", "itemType": "java.lang.String" }, { "itemName": "org.kie.server.controller", "itemValue": "http://localhost:8080/business-central/rest/controller", "itemType": "java.lang.String" } ] }, "containers": [ { "container-id": "employee-rostering", "release-id": { "group-id": "employeerostering", "artifact-id": "employeerostering", "version": "1.0.0-SNAPSHOT" }, "resolved-release-id": null, "status": "STARTED", "scanner": { "status": "STOPPED", "poll-interval": null }, "config-items": [ { "itemName": "KBase", "itemValue": "", "itemType": "BPM" }, { "itemName": "KSession", "itemValue": "", "itemType": "BPM" }, { "itemName": "MergeMode", "itemValue": "MERGE_COLLECTIONS", "itemType": "BPM" }, { "itemName": "RuntimeStrategy", "itemValue": "SINGLETON", "itemType": "BPM" } ], "messages": [], "container-alias": "employeerostering" } ] } } } ] } CreateContainerCommand Creates a KIE container in the KIE Server. Table 23.1. Command attributes Name Description Requirement container Map containing the container-id , release-id data (group ID, artifact ID, version), status , and any other components of the new KIE container Required Example REST request body (JSON) { "commands" : [ { "create-container" : { "container" : { "status" : null, "messages" : [ ], "container-id" : "command-script-container", "release-id" : { "version" : "1.0", "group-id" : "com.redhat", "artifact-id" : "Project1" }, "config-items" : [ ] } } } ] } Example Java client method ServiceResponse<KieContainerResource> response = kieServicesClient.createContainer("command-script-container", resource); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully deployed with module com.redhat:Project1:1.0.", "result": { "kie-container": { "container-id": "command-script-container", "release-id": { "version" : "1.0", "group-id" : "com.redhat", "artifact-id" : "Project1" }, "resolved-release-id": { "version" : "1.0", "group-id" : "com.redhat", "artifact-id" : "Project1" }, "status": "STARTED", "scanner": { "status": "DISPOSED", "poll-interval": null }, "config-items": [], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1538762455510 }, "content": [ "Container command-script-container successfully created with module com.redhat:Project1:1.0." ] } ], "container-alias": null } } } ] } GetContainerInfoCommand Returns information about a specified KIE container in KIE Server. Table 23.2. 
Command attributes Name Description Requirement container-id ID of the KIE container Required Example REST request body (JSON) { "commands" : [ { "get-container-info" : { "container-id" : "command-script-container" } } ] } Example Java client method ServiceResponse<KieContainerResource> response = kieServicesClient.getContainerInfo("command-script-container"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Info for container command-script-container", "result": { "kie-container": { "container-id": "command-script-container", "release-id": { "group-id": "com.redhat", "artifact-id": "Project1", "version": "1.0" }, "resolved-release-id": { "group-id": "com.redhat", "artifact-id": "Project1", "version": "1.0" }, "status": "STARTED", "scanner": { "status": "DISPOSED", "poll-interval": null }, "config-items": [ ], "container-alias": null } } } ] } ListContainersCommand Returns a list of KIE containers that have been created in this KIE Server instance. Table 23.3. Command attributes Name Description Requirement kie-container-filter Optional map containing release-id-filter , container-status-filter , and any other KIE container properties by which you want to filter results Optional Example REST request body (JSON) { "commands" : [ { "list-containers" : { "kie-container-filter" : { "release-id-filter" : { }, "container-status-filter" : { "accepted-status" : ["FAILED"] } } } } ] } Example Java client method KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder() .status(KieContainerStatus.FAILED) .build(); KieContainerResourceList containersList = kieServicesClient.listContainers(filter); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "List of created containers", "result": { "kie-containers": { "kie-container": [ { "container-id": "command-script-container", "release-id": { "group-id": "com.redhat", "artifact-id": "Project1", "version": "1.0" }, "resolved-release-id": { "group-id": "com.redhat", "artifact-id": "Project1", "version": "1.0" }, "status": "STARTED", "scanner": { "status": "STARTED", "poll-interval": 5000 }, "config-items": [ { "itemName": "RuntimeStrategy", "itemValue": "SINGLETON", "itemType": "java.lang.String" }, { "itemName": "MergeMode", "itemValue": "MERGE_COLLECTIONS", "itemType": "java.lang.String" }, { "itemName": "KBase", "itemValue": "", "itemType": "java.lang.String" }, { "itemName": "KSession", "itemValue": "", "itemType": "java.lang.String" } ], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1538504619749 }, "content": [ "Container command-script-container successfully created with module com.redhat:Project1:1.0." ] } ], "container-alias": null } ] } } } ] } CallContainerCommand Calls a KIE container and executes one or more runtime commands. For information about Red Hat Decision Manager runtime commands, see Chapter 24, Runtime commands in Red Hat Decision Manager . Table 23.4. 
Command attributes Name Description Requirement container-id ID of the KIE container to be called Required payload One or more commands in a BatchExecutionCommand wrapper to be executed on the KIE container Required Example REST request body (JSON) { "commands" : [ { "call-container" : { "payload" : "{\n \"lookup\" : \"defaultKieSession\",\n \"commands\" : [ {\n \"fire-all-rules\" : {\n \"max\" : -1,\n \"out-identifier\" : null\n }\n } ]\n}", "container-id" : "command-script-container" } } ] } Example Java client method List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand batchExecution1 = commandsFactory.newBatchExecution(commands, "defaultKieSession"); commands.add(commandsFactory.newFireAllRules()); ServiceResponse<ExecutionResults> response1 = ruleClient.executeCommandsWithResults("command-script-container", batchExecution1); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": "{\n \"results\" : [ ],\n \"facts\" : [ ]\n}" } ] } DisposeContainerCommand Disposes a specified KIE container in the KIE Server. Table 23.5. Command attributes Name Description Requirement container-id ID of the KIE container to be disposed Required Example REST request body (JSON) { "commands" : [ { "dispose-container" : { "container-id" : "command-script-container" } } ] } Example Java client method ServiceResponse<Void> response = kieServicesClient.disposeContainer("command-script-container"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully disposed.", "result": null } ] } GetScannerInfoCommand Returns information about the KIE scanner used for automatic updates in a specified KIE container, if applicable. Table 23.6. Command attributes Name Description Requirement container-id ID of the KIE container where the KIE scanner is used Required Example REST request body (JSON) { "commands" : [ { "get-scanner-info" : { "container-id" : "command-script-container" } } ] } Example Java client method ServiceResponse<KieScannerResource> response = kieServicesClient.getScannerInfo("command-script-container"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Scanner info successfully retrieved", "result": { "kie-scanner": { "status": "DISPOSED", "poll-interval": null } } } ] } UpdateScannerCommand Starts or stops a KIE scanner that controls polling for updated KIE container deployments. Note Avoid using a KIE scanner with business processes. Using a KIE scanner with processes can lead to unforeseen updates that can then cause errors in long-running processes when changes are not compatible with running process instances. Table 23.7. Command attributes Name Description Requirement container-id ID of the KIE container where the KIE scanner is used Required status Status to be set on the KIE scanner ( STARTED , STOPPED ) Required poll-interval Permitted polling duration in milliseconds Required only when starting scanner Example REST request body (JSON) { "commands" : [ { "update-scanner" : { "scanner" : { "status" : "STARTED", "poll-interval" : 10000 }, "container-id" : "command-script-container" } } ] } Example Java client method KieScannerResource scannerResource = new KieScannerResource(); scannerResource.setPollInterval(10000); scannerResource.setStatus(KieScannerStatus. 
STARTED); ServiceResponse<KieScannerResource> response = kieServicesClient.updateScanner("command-script-container", scannerResource); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Kie scanner successfully created.", "result": { "kie-scanner": { "status": "STARTED", "poll-interval": 10000 } } } ] } UpdateReleaseIdCommand Updates the release ID data (group ID, artifact ID, version) for a specified KIE container. Table 23.8. Command attributes Name Description Requirement container-id ID of the KIE container to be updated Required releaseId Updated GAV (group ID, artifact ID, version) data to be applied to the KIE container Required Example REST request body (JSON) { "commands" : [ { "update-release-id" : { "releaseId" : { "version" : "1.1", "group-id" : "com.redhat", "artifact-id" : "Project1" }, "container-id" : "command-script-container" } } ] } Example Java client method ServiceResponse<ReleaseId> response = kieServicesClient.updateReleaseId("command-script-container", "com.redhat:Project1:1.1"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Release id successfully updated", "result": { "release-id": { "group-id": "com.redhat", "artifact-id": "Project1", "version": "1.1" } } } ] }
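As a concrete illustration of sending one of these commands through the KIE Server REST API, the following curl sketch posts the GetServerInfoCommand request body shown above to the /config endpoint described earlier. The user name, password, host, and port are placeholders; replace them with the values for your KIE Server installation.

curl -u 'kieserver-user:password' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -X POST \
  -d '{ "commands" : [ { "get-server-info" : { } } ] }' \
  http://localhost:8080/kie-server/services/rest/server/config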
[ "{ \"commands\": [ { \"create-container\": { \"container\": { \"status\": \"STARTED\", \"container-id\": \"command-script-container\", \"release-id\": { \"version\": \"1.0\", \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\" } } } }, { \"call-container\": { \"payload\": \"{\\n \\\"commands\\\" : [ {\\n \\\"fire-all-rules\\\" : {\\n \\\"max\\\" : -1,\\n \\\"out-identifier\\\" : null\\n }\\n } ]\\n}\", \"container-id\": \"command-script-container\" } }, { \"dispose-container\": { \"container-id\": \"command-script-container\" } } ] }", "public void disposeAndCreateContainer() { System.out.println(\"== Disposing and creating containers ==\"); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println(\"No containers available...\"); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println(\"Error disposing \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Success Disposing container \" + containerId); System.out.println(\"Trying to recreate the container...\"); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println(\"Error creating \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Container recreated with success!\"); }", "{ \"commands\" : [ { \"get-server-info\" : { } } ] }", "KieServerInfo serverInfo = kieServicesClient.getServerInfo();", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Kie Server info\", \"result\": { \"kie-server-info\": { \"id\": \"default-kieserver\", \"version\": \"7.11.0.Final-redhat-00001\", \"name\": \"default-kieserver\", \"location\": \"http://localhost:8080/kie-server/services/rest/server\", \"capabilities\": [ \"KieServer\", \"BRM\", \"BPM\", \"CaseMgmt\", \"BPM-UI\", \"BRP\", \"DMN\", \"Swagger\" ], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1538502533321 }, \"content\": [ \"Server KieServerInfo{serverId='default-kieserver', version='7.11.0.Final-redhat-00001', name='default-kieserver', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger], messages=null}started successfully at Tue Oct 02 13:48:53 EDT 2018\" ] } ] } } } ] }", "{ \"commands\" : [ { \"get-server-state\" : { } } ] }", "KieServerStateInfo serverStateInfo = kieServicesClient.getServerState();", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Successfully loaded server state for server id default-kieserver\", \"result\": { \"kie-server-state-info\": { \"controller\": [ \"http://localhost:8080/business-central/rest/controller\" ], \"config\": { \"config-items\": [ { \"itemName\": \"org.kie.server.location\", \"itemValue\": \"http://localhost:8080/kie-server/services/rest/server\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"org.kie.server.controller.user\", \"itemValue\": \"controllerUser\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"org.kie.server.controller\", \"itemValue\": 
\"http://localhost:8080/business-central/rest/controller\", \"itemType\": \"java.lang.String\" } ] }, \"containers\": [ { \"container-id\": \"employee-rostering\", \"release-id\": { \"group-id\": \"employeerostering\", \"artifact-id\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\": null, \"status\": \"STARTED\", \"scanner\": { \"status\": \"STOPPED\", \"poll-interval\": null }, \"config-items\": [ { \"itemName\": \"KBase\", \"itemValue\": \"\", \"itemType\": \"BPM\" }, { \"itemName\": \"KSession\", \"itemValue\": \"\", \"itemType\": \"BPM\" }, { \"itemName\": \"MergeMode\", \"itemValue\": \"MERGE_COLLECTIONS\", \"itemType\": \"BPM\" }, { \"itemName\": \"RuntimeStrategy\", \"itemValue\": \"SINGLETON\", \"itemType\": \"BPM\" } ], \"messages\": [], \"container-alias\": \"employeerostering\" } ] } } } ] }", "{ \"commands\" : [ { \"create-container\" : { \"container\" : { \"status\" : null, \"messages\" : [ ], \"container-id\" : \"command-script-container\", \"release-id\" : { \"version\" : \"1.0\", \"group-id\" : \"com.redhat\", \"artifact-id\" : \"Project1\" }, \"config-items\" : [ ] } } } ] }", "ServiceResponse<KieContainerResource> response = kieServicesClient.createContainer(\"command-script-container\", resource);", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully deployed with module com.redhat:Project1:1.0.\", \"result\": { \"kie-container\": { \"container-id\": \"command-script-container\", \"release-id\": { \"version\" : \"1.0\", \"group-id\" : \"com.redhat\", \"artifact-id\" : \"Project1\" }, \"resolved-release-id\": { \"version\" : \"1.0\", \"group-id\" : \"com.redhat\", \"artifact-id\" : \"Project1\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"DISPOSED\", \"poll-interval\": null }, \"config-items\": [], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1538762455510 }, \"content\": [ \"Container command-script-container successfully created with module com.redhat:Project1:1.0.\" ] } ], \"container-alias\": null } } } ] }", "{ \"commands\" : [ { \"get-container-info\" : { \"container-id\" : \"command-script-container\" } } ] }", "ServiceResponse<KieContainerResource> response = kieServicesClient.getContainerInfo(\"command-script-container\");", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Info for container command-script-container\", \"result\": { \"kie-container\": { \"container-id\": \"command-script-container\", \"release-id\": { \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\", \"version\": \"1.0\" }, \"resolved-release-id\": { \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\", \"version\": \"1.0\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"DISPOSED\", \"poll-interval\": null }, \"config-items\": [ ], \"container-alias\": null } } } ] }", "{ \"commands\" : [ { \"list-containers\" : { \"kie-container-filter\" : { \"release-id-filter\" : { }, \"container-status-filter\" : { \"accepted-status\" : [\"FAILED\"] } } } } ] }", "KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder() .status(KieContainerStatus.FAILED) .build(); KieContainerResourceList containersList = kieServicesClient.listContainers(filter);", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"List of created containers\", \"result\": { \"kie-containers\": { \"kie-container\": [ { \"container-id\": \"command-script-container\", \"release-id\": { \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\", \"version\": 
\"1.0\" }, \"resolved-release-id\": { \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\", \"version\": \"1.0\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"STARTED\", \"poll-interval\": 5000 }, \"config-items\": [ { \"itemName\": \"RuntimeStrategy\", \"itemValue\": \"SINGLETON\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"MergeMode\", \"itemValue\": \"MERGE_COLLECTIONS\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KBase\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KSession\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" } ], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1538504619749 }, \"content\": [ \"Container command-script-container successfully created with module com.redhat:Project1:1.0.\" ] } ], \"container-alias\": null } ] } } } ] }", "{ \"commands\" : [ { \"call-container\" : { \"payload\" : \"{\\n \\\"lookup\\\" : \\\"defaultKieSession\\\",\\n \\\"commands\\\" : [ {\\n \\\"fire-all-rules\\\" : {\\n \\\"max\\\" : -1,\\n \\\"out-identifier\\\" : null\\n }\\n } ]\\n}\", \"container-id\" : \"command-script-container\" } } ] }", "List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand batchExecution1 = commandsFactory.newBatchExecution(commands, \"defaultKieSession\"); commands.add(commandsFactory.newFireAllRules()); ServiceResponse<ExecutionResults> response1 = ruleClient.executeCommandsWithResults(\"command-script-container\", batchExecution1);", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": \"{\\n \\\"results\\\" : [ ],\\n \\\"facts\\\" : [ ]\\n}\" } ] }", "{ \"commands\" : [ { \"dispose-container\" : { \"container-id\" : \"command-script-container\" } } ] }", "ServiceResponse<Void> response = kieServicesClient.disposeContainer(\"command-script-container\");", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully disposed.\", \"result\": null } ] }", "{ \"commands\" : [ { \"get-scanner-info\" : { \"container-id\" : \"command-script-container\" } } ] }", "ServiceResponse<KieScannerResource> response = kieServicesClient.getScannerInfo(\"command-script-container\");", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Scanner info successfully retrieved\", \"result\": { \"kie-scanner\": { \"status\": \"DISPOSED\", \"poll-interval\": null } } } ] }", "{ \"commands\" : [ { \"update-scanner\" : { \"scanner\" : { \"status\" : \"STARTED\", \"poll-interval\" : 10000 }, \"container-id\" : \"command-script-container\" } } ] }", "KieScannerResource scannerResource = new KieScannerResource(); scannerResource.setPollInterval(10000); scannerResource.setStatus(KieScannerStatus. 
STARTED); ServiceResponse<KieScannerResource> response = kieServicesClient.updateScanner(\"command-script-container\", scannerResource);", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Kie scanner successfully created.\", \"result\": { \"kie-scanner\": { \"status\": \"STARTED\", \"poll-interval\": 10000 } } } ] }", "{ \"commands\" : [ { \"update-release-id\" : { \"releaseId\" : { \"version\" : \"1.1\", \"group-id\" : \"com.redhat\", \"artifact-id\" : \"Project1\" }, \"container-id\" : \"command-script-container\" } } ] }", "ServiceResponse<ReleaseId> response = kieServicesClient.updateReleaseId(\"command-script-container\", \"com.redhat:Project1:1.1\");", "{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Release id successfully updated\", \"result\": { \"release-id\": { \"group-id\": \"com.redhat\", \"artifact-id\": \"Project1\", \"version\": \"1.1\" } } } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/kie-server-commands-con_kie-apis
6.14. Miscellaneous Cluster Configuration
6.14. Miscellaneous Cluster Configuration This section describes using the ccs command to configure the following: Section 6.14.1, "Cluster Configuration Version" Section 6.14.2, "Multicast Configuration" Section 6.14.3, "Configuring a Two-Node Cluster" Section 6.14.4, "Logging" Section 6.14.5, "Configuring Redundant Ring Protocol" You can also use the ccs command to set advanced cluster configuration parameters, including totem options, dlm options, rm options and cman options. For information on setting these parameters see the ccs (8) man page and the annotated cluster configuration file schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html . To view a list of the miscellaneous cluster attributes that have been configured for a cluster, execute the following command: 6.14.1. Cluster Configuration Version A cluster configuration file includes a cluster configuration version value. The configuration version value is set to 1 by default when you create a cluster configuration file and it is automatically incremented each time you modify your cluster configuration. However, if you need to set it to another value, you can specify it with the following command: You can get the current configuration version value on an existing cluster configuration file with the following command: To increment the current configuration version value by 1 in the cluster configuration file on every node in the cluster, execute the following command:
[ "ccs -h host --lsmisc", "ccs -h host --setversion n", "ccs -h host --getversion", "ccs -h host --incversion" ]
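A short worked example tying the version commands together; the host name node01.example.com and the version numbers are illustrative placeholders, and the comments only indicate the expected flow of reading, setting, and incrementing the configuration version.

ccs -h node01.example.com --getversion     # returns, for example, 2
ccs -h node01.example.com --setversion 5
ccs -h node01.example.com --getversion     # now returns 5
ccs -h node01.example.com --incversion
ccs -h node01.example.com --getversion     # now returns 6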
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-general-prop-ccs-ca
25.3. Kickstart Considerations
25.3. Kickstart Considerations Commands for using VNC are also available in Kickstart installations. Using only the vnc command results in an installation using Direct Mode. Additional options are available to set up an installation using Connect Mode. For more information about the vnc command and options used in Kickstart files, see Section 27.3.1, "Kickstart Commands and Options" .
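To make the two modes concrete, the following Kickstart fragments are a minimal sketch rather than an excerpt from the referenced section; the option names follow the vnc Kickstart command in RHEL 7, and the host name, port, and password are placeholders.

# Direct Mode: the installer starts a VNC server and waits for a viewer to connect
vnc --password=changeme

# Connect Mode: the installer connects to a VNC viewer already listening on the given host
vnc --host=vncviewer.example.com --port=5500 --password=changeme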
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-vnc-kickstart-considerations
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on any platform that you provision, including bare metal, virtualized, and cloud environments. Both internal and external OpenShift Data Foundation clusters are supported on these environments. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirements: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/preface-agnostic
Appendix B. Custom Resource API Reference
Appendix B. Custom Resource API Reference B.1. Common configuration properties Common configuration properties apply to more than one resource. B.1.1. replicas Use the replicas property to configure replicas. The type of replication depends on the resource. KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster. Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability. Note When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running. B.1.2. bootstrapServers Use the bootstrapServers property to configure a list of bootstrap servers. The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by AMQ Streams. If on the same OpenShift cluster, each list must ideally contain the Kafka cluster bootstrap service, which is named CLUSTER-NAME -kafka-bootstrap , and a port number. If deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the approach used for exposing the clusters (routes, nodeports or loadbalancers). When using Kafka with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster. B.1.3. ssl Use the three allowed ssl configuration options to configure client connections with a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example SSL configuration # ... spec: config: ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1 ssl.enabled.protocols: "TLSv1.2" 2 ssl.protocol: "TLSv1.2" 3 ssl.endpoint.identification.algorithm: HTTPS 4 # ... 1 The cipher suite for TLS using a combination of the ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encryption algorithm, and SHA384 MAC algorithm. 2 The SSL protocol TLSv1.2 is enabled. 3 Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2 . 4 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. B.1.4. trustedCertificates Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format. You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-TLS-CERTIFICATE-FILE.crt Example TLS encryption configuration tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt If certificates are stored in the same secret, it can be listed multiple times.
If you want to enable TLS, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array: Example of enabling TLS with the default Java certificates tls: trustedCertificates: [] For information on configuring TLS client authentication, see KafkaClientAuthenticationTls schema reference . B.1.5. resources You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container. Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource. Use the resources.requests and resources.limits properties to configure resource requests and limits. For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources. AMQ Streams supports requests and limits for the following types of resources: cpu memory AMQ Streams uses the OpenShift syntax for specifying these resources. For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers . Resource requests Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available. Important If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled. A request may be configured for one or more supported resources. Resource limits Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests. A limit may be configured for one or more supported resources. Supported CPU formats CPU requests and limits are supported in the following formats: Number of CPU cores as integer ( 5 CPU core) or decimal ( 2.5 CPU core). Number of millicpus / millicores ( 100m ), where 1000 millicores is the same as 1 CPU core. Note The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed. For more information on CPU specification, see the Meaning of CPU . Supported memory formats Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. To specify memory in megabytes, use the M suffix. For example 1000M . To specify memory in gigabytes, use the G suffix. For example 1G . To specify memory in mebibytes, use the Mi suffix. For example 1000Mi . To specify memory in gibibytes, use the Gi suffix. For example 1Gi . For more details about memory specification and additional supported units, see Meaning of memory . B.1.6. image Use the image property to configure the container image used by the component. Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image. For example, if your network does not allow access to the container repository used by AMQ Streams, you can copy the AMQ Streams images or build them from the source. However, if the configured image is not compatible with AMQ Streams images, it might not work properly. A copy of the container image might also be customized and used for debugging.
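As a brief illustration of the resources property described in section B.1.5 above, the following fragment is a minimal sketch; the values are arbitrary examples in the supported CPU and memory formats, not sizing recommendations.

# ...
resources:
  requests:
    cpu: 500m
    memory: 2Gi
  limits:
    cpu: "1"
    memory: 2Gi
# ...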
You can specify which container image to use for a component using the image property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.tlsSidecar KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables: STRIMZI_KAFKA_IMAGES STRIMZI_KAFKA_CONNECT_IMAGES STRIMZI_KAFKA_CONNECT_S2I_IMAGES STRIMZI_KAFKA_MIRROR_MAKER_IMAGES These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties: If neither image nor version are given in the custom resource then the version will default to the Cluster Operator's default Kafka version, and the image will be the one corresponding to this version in the environment variable. If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator's default Kafka version. If version is given but image is not, then the image that corresponds to the given version in the environment variable is used. If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version. The image and version for the different components can be configured in the following properties: For Kafka in spec.kafka.image and spec.kafka.version . For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version . Warning It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator's environment variables. Configuring the image property in other resources For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used. For Topic Operator: Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image. For User Operator: Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image. For Entity Operator TLS sidecar: Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 container image. For Kafka Exporter: Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 container image. 
For Kafka Bridge: Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.6.7 container image. For Kafka broker initializer: Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image. Example of container image configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ... B.1.7. livenessProbe and readinessProbe healthchecks Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in AMQ Streams. Healthchecks are periodic tests that verify the health of an application. When a healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. For more details about the probes, see Configure Liveness and Readiness Probes . Both livenessProbe and readinessProbe support the following options: initialDelaySeconds timeoutSeconds periodSeconds successThreshold failureThreshold Example of liveness and readiness probe configuration # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... For more information about the livenessProbe and readinessProbe options, see Probe schema reference . B.1.8. metrics Use the metrics property to enable and configure Prometheus metrics. The metrics property can also contain additional configuration for the Prometheus JMX exporter . AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. To enable Prometheus metrics export without any further configuration, you can set the metrics property to an empty object ( {} ). When metrics are enabled, they are exposed on port 9404. When the metrics property is not defined in the resource, the Prometheus metrics are disabled. For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide. B.1.9. jvmOptions JVM options can be configured using the jvmOptions property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec Only the following JVM options are supported: -Xms Configures the minimum initial allocation heap size when the JVM starts. -Xmx Configures the maximum heap size. Note The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits , which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes. The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container. If there is a memory limit, the JVM's minimum and maximum memory are set to a value corresponding to the limit. If there is no memory limit, the JVM's minimum memory is set to 128M .
The JVM's maximum memory is not defined to allow the memory to grow as needed, which is ideal for single node environments in test and development. Important Setting -Xmx explicitly requires some care: The JVM's overall memory usage will be approximately 4 x the maximum heap, as configured by -Xmx . If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it). If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx , or some later time if not). When setting -Xmx explicitly, it is recommended to: Set the memory request and the memory limit to the same value Use a memory request that is at least 4.5 x the -Xmx Consider setting -Xms to the same value as -Xmx Important Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as an operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM. Example fragment configuring -Xmx and -Xms # ... jvmOptions: "-Xmx": "2g" "-Xms": "2g" # ... In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB. Setting the same value for initial ( -Xms ) and maximum ( -Xmx ) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and ZooKeeper pods, such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation are multiplied by the number of consumers. -server -server enables the server JVM. This option can be set to true or false. Example fragment configuring -server # ... jvmOptions: "-server": true # ... Note When neither of the two options ( -server and -XX ) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used. -XX The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka. Example showing the use of the -XX object jvmOptions: "-XX": "UseG1GC": true "MaxGCPauseMillis": 20 "InitiatingHeapOccupancyPercent": 35 "ExplicitGCInvokesConcurrent": true The example configuration above will result in the following JVM options: -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent Note When neither of the two options ( -server and -XX ) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used. B.1.10. Garbage collector logging The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows: Example of enabling GC logging # ... jvmOptions: gcLoggingEnabled: true # ... B.2. Kafka schema reference Property Description spec The specification of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaSpec status The status of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaStatus B.3. KafkaSpec schema reference Used in: Kafka Property Description kafka Configuration of the Kafka cluster.
KafkaClusterSpec zookeeper Configuration of the ZooKeeper cluster. ZookeeperClusterSpec topicOperator The property topicOperator has been deprecated. This feature should now be configured at path spec.entityOperator.topicOperator . Configuration of the Topic Operator. TopicOperatorSpec entityOperator Configuration of the Entity Operator. EntityOperatorSpec clusterCa Configuration of the cluster certificate authority. CertificateAuthority clientsCa Configuration of the clients certificate authority. CertificateAuthority cruiseControl Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. CruiseControlSpec kafkaExporter Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition. KafkaExporterSpec maintenanceTimeWindows A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression. string array B.4. KafkaClusterSpec schema reference Used in: KafkaSpec Configures a Kafka cluster. B.4.1. listeners Use the listeners property to configure listeners to provide access to Kafka brokers. Example configuration of a plain (unencrypted) listener without authentication apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false # ... zookeeper: # ... B.4.2. config Use the config properties to configure Kafka brokers as keys with values in one of the following JSON types: String Number Boolean You can specify and configure all of the options in the "Broker Configs" section of the Apache Kafka documentation apart from those managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: listeners advertised. broker. listener. host.name port inter.broker.listener.name sasl. ssl. security. password. principal.builder.class log.dir zookeeper.connect zookeeper.set.acl authorizer. super.user When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection. Example Kafka broker configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" zookeeper.connection.timeout.ms: 6000 # ... Property Description replicas The number of pods in the cluster. integer image The docker image for the pods. The default value depends on the configured Kafka.spec.kafka.version . string storage Storage configuration (disk). Cannot be updated. 
The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage listeners Configures listeners of Kafka brokers. GenericKafkaListener array or KafkaListeners authorization Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property within the given object, which must be one of [simple, opa, keycloak]. KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak config Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., principal.builder.class, log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers (with the exception of: zookeeper.connection.timeout.ms, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols,cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms,cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms). map rack Configuration of the broker.rack broker config. Rack brokerRackInitImage The image of the init container used for initializing the broker.rack . string affinity The property affinity has been deprecated. This feature should now be configured at path spec.kafka.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.kafka.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options for Kafka brokers. KafkaJmxOptions resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map logging Logging configuration for Kafka. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging tlsSidecar The property tlsSidecar has been deprecated. TLS sidecar configuration. TlsSidecar template Template for Kafka cluster resources. The template allows users to specify how are the StatefulSet , Pods and Services generated. KafkaClusterTemplate version The kafka broker version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string B.5. EphemeralStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes the use of the type EphemeralStorage from PersistentClaimStorage . It must have the value ephemeral for the type EphemeralStorage . Property Description id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. 
integer sizeLimit When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). string type Must be ephemeral . string B.6. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes the use of the type PersistentClaimStorage from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Description type Must be persistent-claim . string size When type=persistent-claim, defines the size of the persistent volume claim (i.e 1Gi). Mandatory when type=persistent-claim. string selector Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. map deleteClaim Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. boolean class The storage class to use for dynamic volume allocation. string id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer overrides Overrides for individual brokers. The overrides field allows to specify a different configuration for different brokers. PersistentClaimStorageOverride array B.7. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage Property Description class The storage class to use for dynamic volume allocation for this broker. string broker Id of the kafka broker (broker identifier). integer B.8. JbodStorage schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes the use of the type JbodStorage from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Description type Must be jbod . string volumes List of volumes as Storage objects representing the JBOD disks array. EphemeralStorage , PersistentClaimStorage array B.9. GenericKafkaListener schema reference Used in: KafkaClusterSpec Configures listeners to connect to Kafka brokers within and outside OpenShift. You configure the listeners in the Kafka resource. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: false authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... B.9.1. listeners You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example listener configuration listeners: - name: plain port: 9092 type: internal tls: false The name and port must be unique within the Kafka cluster. The name can be up to 25 characters long, comprising lower-case letters and numbers. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. By specifying a unique name and port for each listener, you can configure multiple listeners. B.9.2. type The type is set as internal , or for external listeners, as route , loadbalancer , nodeport or ingress . 
internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes . A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: false authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... Note External listeners using Ingress are currently only tested with the NGINX Ingress Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka Loadbalancer type Services . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using NodePort type Services . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. B.9.3. 
port The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' Note Listeners cannot be configured to use the ports set aside for interbroker communication (9091) and metrics (9404). B.9.4. tls The TLS property is required. By default, TLS encryption is not enabled. To enable it, set the tls property to true . TLS encryption is always used with route listeners. B.9.5. authentication Authentication for the listener can be specified as: Mutual TLS ( tls ) SCRAM-SHA-512 ( scram-sha-512 ) Token-based OAuth 2.0 ( oauth ). B.9.6. networkPolicyPeers Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... In the example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources. Backwards compatibility with KafkaListeners GenericKafkaListener replaces the KafkaListeners schema, which is now deprecated. To convert the listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema, with backwards compatibility, use the following names, ports and types: listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 1 tls: true # ... 1 Options: ingress , loadbalancer , nodeport , route Property Description name Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. string port Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. integer type Type of the listener. 
Currently the supported types are internal , route , loadbalancer , nodeport and ingress . * internal type exposes Kafka internally only within the OpenShift cluster. * route type uses OpenShift Routes to expose Kafka. * loadbalancer type uses LoadBalancer type services to expose Kafka. * nodeport type uses NodePort type services to expose Kafka. * ingress type uses OpenShift Nginx Ingress to expose Kafka. . string (one of [ingress, internal, route, loadbalancer, nodeport]) tls Enables TLS encryption on the listener. This is a required property. boolean authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth configuration Additional listener configuration. GenericKafkaListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array B.10. KafkaListenerAuthenticationTls schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationTls from KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth . It must have the value tls for the type KafkaListenerAuthenticationTls . Property Description type Must be tls . string B.11. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationScramSha512 from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512 . Property Description type Must be scram-sha-512 . string B.12. KafkaListenerAuthenticationOAuth schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationOAuth from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 . It must have the value oauth for the type KafkaListenerAuthenticationOAuth . Property Description accessTokenIsJwt Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true . boolean checkAccessTokenType Configure whether the access token type check is performed or not. This should be set to false if the authorization server does not include 'typ' claim in JWT token. Defaults to true . boolean checkIssuer Enable or disable issuer checking. 
By default, the issuer is checked using the value configured by validIssuerUri . Default value is true . boolean clientId OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. GenericSecretSource disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean enableECDSA Enable or disable ECDSA support by installing BouncyCastle crypto provider. Default value is false . boolean fallbackUserNameClaim The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. This is useful when client_credentials authentication only results in the client id being provided in another claim. It only takes effect if userNameClaim is set. string fallbackUserNamePrefix The prefix to use with the value of fallbackUserNameClaim to construct the user id. This only takes effect if fallbackUserNameClaim is true, and the value is present for the claim. Mapping usernames and client ids into the same user id space is useful in preventing name collisions. string introspectionEndpointUri URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens. string jwksEndpointUri URI of the JWKS certificate endpoint, which can be used for local JWT validation. string jwksExpirySeconds Configures how long the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds . Defaults to 360 seconds. integer jwksMinRefreshPauseSeconds The minimum pause between two consecutive refreshes. When an unknown signing key is encountered, the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second. integer jwksRefreshSeconds Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds . Defaults to 300 seconds. integer maxSecondsWithoutReauthentication Maximum number of seconds the authenticated session remains valid without re-authentication. This enables the Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires. If the access token expires before the maximum time or if the maximum time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. integer tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array type Must be oauth . string userInfoEndpointUri URI of the User Info Endpoint to use as a fallback for obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id. string userNameClaim Name of the claim from the JWT authentication token, Introspection Endpoint response or User Info Endpoint response which will be used to extract the user id. Defaults to sub . string validIssuerUri URI of the token issuer used for authentication. string validTokenType Valid value for the token_type attribute returned by the Introspection Endpoint. No default value, and not checked by default. string
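For reference, the following minimal sketch shows a listener that uses oauth authentication with local JWT validation. The authorization server host, realm, and claim name are hypothetical placeholders; the options you actually need depend on your authorization server, so treat this as an illustration rather than a complete configuration.
Example of a listener configured with OAuth 2.0 authentication
listeners:
  #...
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://auth.example.com/auth/realms/my-realm
      jwksEndpointUri: https://auth.example.com/auth/realms/my-realm/protocol/openid-connect/certs
      userNameClaim: preferred_username
  # ...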
B.13. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationOAuth Property Description key The key under which the secret value is stored in the OpenShift Secret. string secretName The name of the OpenShift Secret containing the secret value. string B.14. CertSecretSource schema reference Used in: KafkaAuthorizationKeycloak , KafkaBridgeTls , KafkaClientAuthenticationOAuth , KafkaConnectTls , KafkaListenerAuthenticationOAuth , KafkaMirrorMaker2Tls , KafkaMirrorMakerTls Property Description certificate The name of the file certificate in the Secret. string secretName The name of the Secret containing the certificate. string B.15. GenericKafkaListenerConfiguration schema reference Used in: GenericKafkaListener Configuration for Kafka listeners. B.15.1. brokerCertChainAndKey The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates. Example configuration for a loadbalancer external listener with TLS encryption enabled listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... B.15.2. externalTrafficPolicy The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift, you can choose Local or Cluster . Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster . B.15.3. loadBalancerSourceRanges The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift, use source ranges, in addition to labels and annotations, to customize how a service is created. Example source ranges configured for a loadbalancer listener listeners: #... - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... # ... B.15.4. class The class property is only used with ingress listeners. By default, the Ingress class is set to nginx . You can change the Ingress class using the class property. Example of an external listener of type ingress using Ingress class nginx-internal listeners: #... - name: external port: 9094 type: ingress tls: false configuration: class: nginx-internal # ... # ... B.15.5. preferredNodePortAddressType The preferredNodePortAddressType property is only used with nodeport listeners. Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority: ExternalDNS ExternalIP Hostname InternalDNS InternalIP Example of an external listener configured with a preferred node port address type listeners: #... - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS # ... # ... B.15.6. useServiceDnsDomain The useServiceDnsDomain property is only used with internal listeners.
It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local ) are used. With useServiceDnsDomain set as false , the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc . With useServiceDnsDomain set as true , the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local . Default is false . Example of an internal listener configured to use the Service DNS domain listeners: #... - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true # ... # ... If your OpenShift cluster uses a different service suffix than .cluster.local , you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. See Section 5.1.1, "Cluster Operator configuration" for more details. Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption. CertAndKeySecretSource externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.This field can be used only with loadbalancer or nodeport type listener. string (one of [Local, Cluster]) loadBalancerSourceRanges A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ . This field can be used only with loadbalancer type listener. string array bootstrap Bootstrap configuration. GenericKafkaListenerConfigurationBootstrap brokers Per-broker configurations. GenericKafkaListenerConfigurationBroker array class Configures the Ingress class that defines which Ingress controller will be used. If not set, the Ingress class is set to nginx . This field can be used only with ingress type listener. string preferredNodePortAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): * ExternalDNS * ExternalIP * InternalDNS * InternalIP * Hostname This field can be used to select the address type which will be used as the preferred type and checked first. In case no address will be found for this address type, the other types will be used in the default order.This field can be used only with nodeport type listener.. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) useServiceDnsDomain Configures whether the OpenShift service DNS domain should be used or not. 
If set to true , the generated addresses with contain the service DNS domain suffix (by default .cluster.local , can be configured using environment variable KUBERNETES_SERVICE_DNS_DOMAIN ). Defaults to false .This field can be used only with internal type listener. boolean B.16. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , IngressListenerConfiguration , KafkaClientAuthenticationTls , KafkaListenerExternalConfiguration , NodePortListenerConfiguration , TlsListenerConfiguration Property Description certificate The name of the file certificate in the Secret. string key The name of the private key in the Secret. string secretName The name of the Secret containing the certificate. string B.17. GenericKafkaListenerConfigurationBootstrap schema reference Used in: GenericKafkaListenerConfiguration Configures bootstrap service overrides for external listeners. Broker service equivalents of nodePort , host , loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema . B.17.1. alternativeNames You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of external listeners. Example of an external route listener configured with an additional bootstrap address listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2 # ... B.17.2. host The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services. A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints. Example of host configuration for an ingress listener listeners: #... - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts. AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used. Example of host configuration for a route listener # ... listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com # ... B.17.3. nodePort By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers. AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use. Example of an external listener configured with overrides for node ports # ... listeners: #... 
- name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002 # ... B.17.4. loadBalancerIP Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature. Example of an external listener of type loadbalancer with specific loadbalancer IP address requests # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3 # ... B.17.5. annotations Use the annotations property to add annotations to loadbalancer , nodeport or ingress listeners. You can use these annotations to instrument DNS tooling such as External DNS , which automatically assigns DNS names to the loadbalancer services. Example of an external listener of type loadbalancer using annotations # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" # ... Property Description alternativeNames Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates. string array host The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the bootstrap service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. map B.18. GenericKafkaListenerConfigurationBroker schema reference Used in: GenericKafkaListenerConfiguration Configures broker service overrides for external listeners. You can see example configuration for the nodePort , host , loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema , which configures bootstrap service overrides for external listeners. 
Advertised addresses for brokers By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can specify a broker ID and customize the advertised hostname and port in the configuration property of the external listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of external listeners. Example of an external route listener configured with overrides for advertised addresses listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342 # ... Property Description broker ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the per-broker service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. map B.19. KafkaListeners schema reference The type KafkaListeners has been deprecated. Please use GenericKafkaListener instead. Used in: KafkaClusterSpec Refer to documentation for example configuration. Property Description plain Configures plain listener on port 9092. KafkaListenerPlain tls Configures TLS listener on port 9093. KafkaListenerTls external Configures external listener on port 9094. The type depends on the value of the external.type property within the given object, which must be one of [route, loadbalancer, nodeport, ingress]. KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalIngress B.20. KafkaListenerPlain schema reference Used in: KafkaListeners Property Description authentication Authentication configuration for this listener. Since this listener does not use TLS transport you cannot configure an authentication with type: tls . The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. 
KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array B.21. KafkaListenerTls schema reference Used in: KafkaListeners Property Description authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth configuration Configuration of TLS listener. TlsListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array B.22. TlsListenerConfiguration schema reference Used in: KafkaListenerTls Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource B.23. KafkaListenerExternalRoute schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes the use of the type KafkaListenerExternalRoute from KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalIngress . It must have the value route for the type KafkaListenerExternalRoute . Property Description type Must be route . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. RouteListenerOverride configuration External listener configuration. KafkaListenerExternalConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array B.24. RouteListenerOverride schema reference Used in: KafkaListenerExternalRoute Property Description bootstrap External bootstrap service configuration. RouteListenerBootstrapOverride brokers External broker services configuration. RouteListenerBrokerOverride array B.25. 
RouteListenerBootstrapOverride schema reference Used in: RouteListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string host Host for the bootstrap route. This field will be used in the spec.host field of the OpenShift Route. string B.26. RouteListenerBrokerOverride schema reference Used in: RouteListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host Host for the broker route. This field will be used in the spec.host field of the OpenShift Route. string B.27. KafkaListenerExternalConfiguration schema reference Used in: KafkaListenerExternalLoadBalancer , KafkaListenerExternalRoute Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource B.28. KafkaListenerExternalLoadBalancer schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes the use of the type KafkaListenerExternalLoadBalancer from KafkaListenerExternalRoute , KafkaListenerExternalNodePort , KafkaListenerExternalIngress . It must have the value loadbalancer for the type KafkaListenerExternalLoadBalancer . Property Description type Must be loadbalancer . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. LoadBalancerListenerOverride configuration External listener configuration. KafkaListenerExternalConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array tls Enables TLS encryption on the listener. By default set to true for enabled TLS encryption. boolean B.29. LoadBalancerListenerOverride schema reference Used in: KafkaListenerExternalLoadBalancer Property Description bootstrap External bootstrap service configuration. LoadBalancerListenerBootstrapOverride brokers External broker services configuration. LoadBalancerListenerBrokerOverride array B.30. LoadBalancerListenerBootstrapOverride schema reference Used in: LoadBalancerListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS. map loadBalancerIP The loadbalancer is requested with the IP address specified in this field. 
This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. string B.31. LoadBalancerListenerBrokerOverride schema reference Used in: LoadBalancerListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer dnsAnnotations Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. string B.32. KafkaListenerExternalNodePort schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes the use of the type KafkaListenerExternalNodePort from KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalIngress . It must have the value nodeport for the type KafkaListenerExternalNodePort . Property Description type Must be nodeport . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. NodePortListenerOverride configuration External listener configuration. NodePortListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array tls Enables TLS encryption on the listener. By default set to true for enabled TLS encryption. boolean B.33. NodePortListenerOverride schema reference Used in: KafkaListenerExternalNodePort Property Description bootstrap External bootstrap service configuration. NodePortListenerBootstrapOverride brokers External broker services configuration. NodePortListenerBrokerOverride array B.34. NodePortListenerBootstrapOverride schema reference Used in: NodePortListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS. map nodePort Node port for the bootstrap service. integer B.35. NodePortListenerBrokerOverride schema reference Used in: NodePortListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . 
string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer nodePort Node port for the broker service. integer dnsAnnotations Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map B.36. NodePortListenerConfiguration schema reference Used in: KafkaListenerExternalNodePort Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource preferredAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): * ExternalDNS * ExternalIP * InternalDNS * InternalIP * Hostname This field can be used to select the address type which will be used as the preferred type and checked first. In case no address will be found for this address type, the other types will be used in the default order.. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) B.37. KafkaListenerExternalIngress schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes the use of the type KafkaListenerExternalIngress from KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort . It must have the value ingress for the type KafkaListenerExternalIngress . Property Description type Must be ingress . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth class Configures the Ingress class that defines which Ingress controller will be used. If not set, the Ingress class is set to nginx . string configuration External listener configuration. IngressListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array B.38. IngressListenerConfiguration schema reference Used in: KafkaListenerExternalIngress Property Description bootstrap External bootstrap ingress configuration. IngressListenerBootstrapConfiguration brokers External broker ingress configuration. IngressListenerBrokerConfiguration array brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource B.39. IngressListenerBootstrapConfiguration schema reference Used in: IngressListenerConfiguration Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Ingress resource. You can use this field to configure DNS providers such as External DNS. 
map host Host for the bootstrap route. This field will be used in the Ingress resource. string B.40. IngressListenerBrokerConfiguration schema reference Used in: IngressListenerConfiguration Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host Host for the broker ingress. This field will be used in the Ingress resource. string dnsAnnotations Annotations that will be added to the Ingress resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map B.41. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Simple authorization in AMQ Streams uses the AclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple , and configure a list of super users. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . B.41.1. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information see Kafka authorization . An example of simple authorization configuration authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . The type property is a discriminator that distinguishes the use of the type KafkaAuthorizationSimple from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak . It must have the value simple for the type KafkaAuthorizationSimple . Property Description type Must be simple . string superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array B.42. KafkaAuthorizationOpa schema reference Used in: KafkaClusterSpec To use Open Policy Agent authorization, set the type property in the authorization section to the value opa , and configure OPA properties as required. B.42.1. url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required. B.42.2. allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. B.42.3. initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . B.42.4. maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . B.42.5. expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour). B.42.6. 
superUsers A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. For more information see Kafka authorization . An example of Open Policy Agent authorizer configuration authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward The type property is a discriminator that distinguishes the use of the type KafkaAuthorizationOpa from KafkaAuthorizationSimple , KafkaAuthorizationKeycloak . It must have the value opa for the type KafkaAuthorizationOpa . Property Description type Must be opa . string url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. string allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. boolean initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . integer maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . integer expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 . integer superUsers List of super users, which is specifically a list of user principals that have unlimited access rights. string array B.43. KafkaAuthorizationKeycloak schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes the use of the type KafkaAuthorizationKeycloak from KafkaAuthorizationSimple , KafkaAuthorizationOpa . It must have the value keycloak for the type KafkaAuthorizationKeycloak . Property Description type Must be keycloak . string clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string tokenEndpointUri Authorization server token endpoint URI. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean delegateToKafkaAcls Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is false . boolean grantsRefreshPeriodSeconds The time between two consecutive grants refresh runs in seconds. The default value is 60. integer grantsRefreshPoolSize The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5. integer superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array B.44.
Rack schema reference Used in: KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec Property Description topologyKey A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set the broker's broker.rack config. string B.45. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , TopicOperatorSpec , ZookeeperClusterSpec Property Description failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. integer initialDelaySeconds The initial delay before the health is first checked. integer periodSeconds How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. integer successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. integer timeoutSeconds The timeout for each attempted health check. integer B.46. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec Property Description -XX A map of -XX options to the JVM. map -Xms -Xms option to the JVM. string -Xmx -Xmx option to the JVM. string gcLoggingEnabled Specifies whether the Garbage Collection logging is enabled. The default is false. boolean javaSystemProperties A map of additional system properties which will be passed using the -D option to the JVM. SystemProperty array B.47. SystemProperty schema reference Used in: JvmOptions Property Description name The system property name. string value The system property value. string B.48. KafkaJmxOptions schema reference Used in: KafkaClusterSpec Property Description authentication Authentication configuration for connecting to the Kafka JMX port. The type depends on the value of the authentication.type property within the given object, which must be one of [password]. KafkaJmxAuthenticationPassword B.49. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes the use of the type KafkaJmxAuthenticationPassword from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Description type Must be password . string B.50. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes the use of the type InlineLogging from ExternalLogging . It must have the value inline for the type InlineLogging . Property Description type Must be inline . string loggers A Map from logger name to logger level. map B.51.
ExternalLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes the use of the type ExternalLogging from InlineLogging . It must have the value external for the type ExternalLogging . Property Description type Must be external . string name The name of the ConfigMap from which to get the logging configuration. string B.52. TlsSidecar schema reference Used in: CruiseControlSpec , EntityOperatorSpec , KafkaClusterSpec , TopicOperatorSpec , ZookeeperClusterSpec Property Description image The docker image for the container. string livenessProbe Pod liveness checking. Probe logLevel The log level for the TLS sidecar. Default value is notice . string (one of [emerg, debug, crit, err, alert, warning, notice, info]) readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements B.53. KafkaClusterTemplate schema reference Used in: KafkaClusterSpec Property Description statefulset Template for Kafka StatefulSet . StatefulSetTemplate pod Template for Kafka Pods . PodTemplate bootstrapService Template for Kafka bootstrap Service . ResourceTemplate brokersService Template for Kafka broker Service . ResourceTemplate externalBootstrapService Template for Kafka external bootstrap Service . ExternalServiceTemplate perPodService Template for Kafka per-pod Services used for access from outside of OpenShift. ExternalServiceTemplate externalBootstrapRoute Template for Kafka external bootstrap Route . ResourceTemplate perPodRoute Template for Kafka per-pod Routes used for access from outside of OpenShift. ResourceTemplate externalBootstrapIngress Template for Kafka external bootstrap Ingress . ResourceTemplate perPodIngress Template for Kafka per-pod Ingress used for access from outside of OpenShift. ResourceTemplate persistentVolumeClaim Template for all Kafka PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for Kafka PodDisruptionBudget . PodDisruptionBudgetTemplate kafkaContainer Template for the Kafka broker container. ContainerTemplate tlsSidecarContainer The property tlsSidecarContainer has been deprecated. Template for the Kafka broker TLS sidecar container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate B.54. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate podManagementPolicy PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel . string (one of [OrderedReady, Parallel]) B.55. MetadataTemplate schema reference Used in: ExternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: statefulset: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . 
Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured. Property Description labels Labels added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map annotations Annotations added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map B.56. PodTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Example PodTemplate configuration # ... template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 # ... B.56.1. hostAliases Use the hostAliases property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts file of the pod. This configuration is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users. Example hostAliases configuration apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect #... spec: # ... template: pod: hostAliases: - ip: "192.168.1.86" hostnames: - "my-host-1" - "my-host-2" #... Property Description metadata Metadata applied to the resource. MetadataTemplate imagePullSecrets List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the STRIMZI_IMAGE_PULL_SECRETS environment variable in Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets variable is used and the STRIMZI_IMAGE_PULL_SECRETS variable is ignored. See external documentation of core/v1 localobjectreference . LocalObjectReference array securityContext Configures pod-level security attributes and common container settings. See external documentation of core/v1 podsecuritycontext . PodSecurityContext terminationGracePeriodSeconds The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds. integer affinity The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The pod's tolerations. See external documentation of core/v1 toleration . Toleration array priorityClassName The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption . string schedulerName The name of the scheduler used to dispatch this Pod . If not specified, the default scheduler will be used. string hostAliases The pod's HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. See external documentation of core/v1 HostAlias . HostAlias array B.57.
ResourceTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaUserTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate B.58. ExternalServiceTemplate schema reference Used in: KafkaClusterTemplate When exposing Kafka outside of OpenShift using loadbalancers or node ports, you can use properties, in addition to labels and annotations, to customize how a Service is created. An example showing customized external services # ... template: externalBootstrapService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 perPodService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... Property Description metadata Metadata applied to the resource. MetadataTemplate externalTrafficPolicy The property externalTrafficPolicy has been deprecated. Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and NodePort type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default. string (one of [Local, Cluster]) loadBalancerSourceRanges The property loadBalancerSourceRanges has been deprecated. A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ . string array B.59. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate AMQ Streams creates a PodDisruptionBudget for every new StatefulSet or Deployment . By default, pod disruption budgets only allow a single pod to be unavailable at a given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property in the PodDisruptionBudget.spec resource. An example of PodDisruptionBudget template # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer B.60. ContainerTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate You can set custom security context and environment variables for a container.
The environment variables are defined under the env property as a list of objects with name and value fields. The following example shows two custom environment variables and a custom security context set for the Kafka broker containers: # ... template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000 # ... Environment variables prefixed with KAFKA_ are internal to AMQ Streams and should be avoided. If you set a custom environment variable that is already in use by AMQ Streams, it is ignored and a warning is recorded in the log. Property Description env Environment variables which should be applied to the container. ContainerEnvVar array securityContext Security context for the container. See external documentation of core/v1 securitycontext . SecurityContext B.61. ContainerEnvVar schema reference Used in: ContainerTemplate Property Description name The environment variable key. string value The environment variable value. string B.62. ZookeeperClusterSpec schema reference Used in: KafkaSpec Property Description replicas The number of pods in the cluster. integer image The docker image for the pods. string storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim]. EphemeralStorage , PersistentClaimStorage config The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification). map affinity The property affinity has been deprecated. This feature should now be configured at path spec.zookeeper.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.zookeeper.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map logging Logging configuration for ZooKeeper. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for ZooKeeper cluster resources. The template allows users to specify how are the StatefulSet , Pods and Services generated. ZookeeperClusterTemplate tlsSidecar The property tlsSidecar has been deprecated. TLS sidecar configuration. The TLS sidecar is not used anymore and this option will be ignored. TlsSidecar B.63. ZookeeperClusterTemplate schema reference Used in: ZookeeperClusterSpec Property Description statefulset Template for ZooKeeper StatefulSet . 
StatefulSetTemplate pod Template for ZooKeeper Pods . PodTemplate clientService Template for ZooKeeper client Service . ResourceTemplate nodesService Template for ZooKeeper nodes Service . ResourceTemplate persistentVolumeClaim Template for all ZooKeeper PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for ZooKeeper PodDisruptionBudget . PodDisruptionBudgetTemplate zookeeperContainer Template for the ZooKeeper container. ContainerTemplate tlsSidecarContainer The property tlsSidecarContainer has been deprecated. Template for the Zookeeper server TLS sidecar container. The TLS sidecar is not used anymore and this option will be ignored. ContainerTemplate B.64. TopicOperatorSpec schema reference The type TopicOperatorSpec has been deprecated. Please use EntityTopicOperatorSpec instead. Used in: KafkaSpec Property Description watchedNamespace The namespace the Topic Operator should watch. string image The image to use for the Topic Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer affinity Pod affinity rules. See external documentation of core/v1 affinity . Affinity resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements topicMetadataMaxAttempts The number of attempts at getting topic metadata. integer tlsSidecar TLS sidecar configuration. TlsSidecar logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe B.65. EntityOperatorSpec schema reference Used in: KafkaSpec Property Description topicOperator Configuration of the Topic Operator. EntityTopicOperatorSpec userOperator Configuration of the User Operator. EntityUserOperatorSpec affinity The property affinity has been deprecated. This feature should now be configured at path spec.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array tlsSidecar TLS sidecar configuration. TlsSidecar template Template for Entity Operator resources. The template allows users to specify how is the Deployment and Pods generated. EntityOperatorTemplate B.66. EntityTopicOperatorSpec schema reference Used in: EntityOperatorSpec Property Description watchedNamespace The namespace the Topic Operator should watch. string image The image to use for the Topic Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements topicMetadataMaxAttempts The number of attempts at getting topic metadata. integer logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. 
InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions B.67. EntityUserOperatorSpec schema reference Used in: EntityOperatorSpec Property Description watchedNamespace The namespace the User Operator should watch. string image The image to use for the User Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions B.68. EntityOperatorTemplate schema reference Used in: EntityOperatorSpec Property Description deployment Template for Entity Operator Deployment . ResourceTemplate pod Template for Entity Operator Pods . PodTemplate tlsSidecarContainer Template for the Entity Operator TLS sidecar container. ContainerTemplate topicOperatorContainer Template for the Entity Topic Operator container. ContainerTemplate userOperatorContainer Template for the Entity User Operator container. ContainerTemplate B.69. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Description generateCertificateAuthority If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. boolean validityDays The number of days generated certificates should be valid for. The default is 365. integer renewalDays The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is true, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. integer certificateExpirationPolicy How should CA certificate expiration be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. string (one of [replace-key, renew-certificate]) B.70. CruiseControlSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string tlsSidecar TLS sidecar configuration. TlsSidecar resources CPU and memory resources to reserve for the Cruise Control container. See external documentation of core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking for the Cruise Control container. Probe readinessProbe Pod readiness checking for the Cruise Control container. Probe jvmOptions JVM Options for the Cruise Control container. JvmOptions logging Logging configuration (log4j1) for Cruise Control. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].
InlineLogging , ExternalLogging template Template to specify how Cruise Control resources, Deployments and Pods , are generated. CruiseControlTemplate brokerCapacity The Cruise Control brokerCapacity configuration. BrokerCapacity config The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations . Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path,webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required,metric.reporter.sampler.bootstrap.servers, metric.reporter.topic, partition.metric.sample.store.topic, broker.metric.sample.store.topic,capacity.config.file, self.healing., anomaly.detection., ssl. map metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map B.71. CruiseControlTemplate schema reference Used in: CruiseControlSpec Property Description deployment Template for Cruise Control Deployment . ResourceTemplate pod Template for Cruise Control Pods . PodTemplate apiService Template for Cruise Control API Service . ResourceTemplate podDisruptionBudget Template for Cruise Control PodDisruptionBudget . PodDisruptionBudgetTemplate cruiseControlContainer Template for the Cruise Control container. ContainerTemplate tlsSidecarContainer Template for the Cruise Control TLS sidecar container. ContainerTemplate B.72. BrokerCapacity schema reference Used in: CruiseControlSpec Property Description disk Broker capacity for disk in bytes, for example, 100Gi. string cpuUtilization Broker capacity for CPU resource utilization as a percentage (0 - 100). integer inboundNetwork Broker capacity for inbound network throughput in bytes per second, for example, 10000KB/s. string outboundNetwork Broker capacity for outbound network throughput in bytes per second, for example 10000KB/s. string B.73. KafkaExporterSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string groupRegex Regular expression to specify which consumer groups to collect. Default value is .* . string topicRegex Regular expression to specify which topics to collect. Default value is .* . string resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements logging Only log messages with the given severity or above. Valid levels: [ debug , info , warn , error , fatal ]. Default log level is info . string enableSaramaLogging Enable Sarama logging, a Go client library used by the Kafka Exporter. boolean template Customization of deployment templates and pods. KafkaExporterTemplate livenessProbe Pod liveness check. Probe readinessProbe Pod readiness check. Probe B.74. KafkaExporterTemplate schema reference Used in: KafkaExporterSpec Property Description deployment Template for Kafka Exporter Deployment . ResourceTemplate pod Template for Kafka Exporter Pods . PodTemplate service Template for Kafka Exporter Service . ResourceTemplate container Template for the Kafka Exporter container. ContainerTemplate B.75. KafkaStatus schema reference Used in: Kafka Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer listeners Addresses of the internal and external listeners. 
ListenerStatus array B.76. Condition schema reference Used in: KafkaBridgeStatus , KafkaConnectorStatus , KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status , KafkaMirrorMakerStatus , KafkaRebalanceStatus , KafkaStatus , KafkaTopicStatus , KafkaUserStatus Property Description type The unique identifier of a condition, used to distinguish between other conditions in the resource. string status The status of the condition, either True, False or Unknown. string lastTransitionTime Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. string reason The reason for the condition's last transition (a single word in CamelCase). string message Human-readable message indicating details about the condition's last transition. string B.77. ListenerStatus schema reference Used in: KafkaStatus Property Description type The type of the listener. Can be one of the following three types: plain , tls , and external . string addresses A list of the addresses for this listener. ListenerAddress array bootstrapServers A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener. string certificates A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners. string array B.78. ListenerAddress schema reference Used in: ListenerStatus Property Description host The DNS name or IP address of the Kafka bootstrap service. string port The port of the Kafka bootstrap service. integer B.79. KafkaConnect schema reference Property Description spec The specification of the Kafka Connect cluster. KafkaConnectSpec status The status of the Kafka Connect cluster. KafkaConnectStatus B.80. KafkaConnectSpec schema reference Used in: KafkaConnect Configures a Kafka Connect cluster. B.80.1. config Use the config properties to configure Kafka options as keys. Standard Apache Kafka Connect configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Listener / REST interface configuration Plugin path configuration The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. listeners plugin.path rest. bootstrap.servers When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect. Important The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. 
Certain options have default values: group.id with default value connect-cluster offset.storage.topic with default value connect-cluster-offsets config.storage.topic with default value connect-cluster-configs status.storage.topic with default value connect-cluster-status key.converter with default value org.apache.kafka.connect.json.JsonConverter value.converter with default value org.apache.kafka.connect.json.JsonConverter These options are automatically configured in case they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties. There are exceptions to the forbidden options. You can use three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example Kafka Connect configuration apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. B.80.2. logging Kafka Connect (and Kafka Connect with Source2Image support) has its own configurable loggers: connect.root.logger.level log4j.logger.org.reflections Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod: curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/ Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect spec: # ... logging: type: inline loggers: connect.root.logger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect spec: # ... logging: type: external name: customConfigMap # ... Any available loggers that are not configured have their level set to OFF . If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. 
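For reference, the following is a minimal sketch of a ConfigMap that could back the external logging example above. It assumes the ConfigMap name customConfigMap used in that example and stores the configuration under a log4j.properties key, as described in this section; the appender definition and logger levels shown are illustrative values, not AMQ Streams defaults.
Example ConfigMap providing an external Kafka Connect logging configuration (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # Console appender and layout (illustrative; adjust to your needs)
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    # Root logger for Kafka Connect, set directly to an illustrative level
    log4j.rootLogger=INFO, CONSOLE
    # Reduce noise from the reflections library (illustrative level)
    log4j.logger.org.reflections=ERROR
As noted above, changing the appenders in an external configuration like this one triggers a rolling update of the Kafka Connect pods.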
Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . Property Description replicas The number of pods in the Kafka Connect group. integer version The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string image The docker image for the pods. string bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string tls TLS configuration. KafkaConnectTls authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map resources The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions affinity The property affinity has been deprecated. This feature should now be configured at path spec.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration clientRackInitImage The image of the init container used for initializing the client.rack . string rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack B.81. KafkaConnectTls schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec Configures TLS trusted certificates for connecting Kafka Connect to the cluster. B.81.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array B.82. 
KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec To use TLS client authentication, set the type property to the value tls . TLS client authentication uses a TLS certificate to authenticate. B.82.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note TLS client authentication can only be used with TLS connections. Example TLS client authentication configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key The type property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationTls from KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Description certificateAndKey Reference to the Secret which holds the certificate and private key pair. CertAndKeySecretSource type Must be tls . string B.83. KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. B.83.1. username Specify the username in the username property. B.83.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field The type property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationScramSha512 from KafkaClientAuthenticationTls , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaClientAuthenticationScramSha512 . 
Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-512 . string username Username used for the authentication. string B.84. PasswordSecretSource schema reference Used in: KafkaClientAuthenticationPlain , KafkaClientAuthenticationScramSha512 Property Description password The name of the key in the Secret under which the password is stored. string secretName The name of the Secret containing the password. string B.85. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. B.85.1. username Specify the username in the username property. B.85.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. An example SASL based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name The type property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationPlain from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be plain . string username Username used for the authentication. string B.86. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec To use OAuth client authentication, set the type property to the value oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. 
In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate. If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If hostname verification is not required, you can disable it. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true The type property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationOAuth from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Description accessToken Link to OpenShift Secret containing the access token which was obtained from the authorization server. GenericSecretSource accessTokenIsJwt Configure whether access token should be treated as JWT.
This should be set to false if the authorization server returns opaque tokens. Defaults to true . boolean clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. GenericSecretSource disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean maxTokenExpirySeconds Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. integer refreshToken Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. GenericSecretSource scope OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how authorization server is configured. By default scope is not specified when doing the token endpoint request. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri Authorization server token endpoint URI. string type Must be oauth . string B.87. JaegerTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes the use of the type JaegerTracing from other subtypes which may be added in the future. It must have the value jaeger for the type JaegerTracing . Property Description type Must be jaeger . string B.88. KafkaConnectTemplate schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Property Description deployment Template for Kafka Connect Deployment . ResourceTemplate pod Template for Kafka Connect Pods . PodTemplate apiService Template for Kafka Connect API Service . ResourceTemplate connectContainer Template for the Kafka Connect container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate podDisruptionBudget Template for Kafka Connect PodDisruptionBudget . PodDisruptionBudgetTemplate B.89. ExternalConfiguration schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Configures external storage properties that define configuration options for Kafka Connect connectors. You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec and KafkaConnectS2I.spec . When applied, the environment variables and volumes are available for use when developing your connectors. B.89.1. env The env property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret. Example Secret containing values for environment variables apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE= Note The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_ . To mount a value from a Secret to an environment variable, use the valueFrom property and the secretKeyRef . 
Example environment variables set to values from a Secret apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables with credentials. To mount a value from a ConfigMap to an environment variable, use configMapKeyRef in the valueFrom property as shown in the following example. Example environment variables set to values from a ConfigMap apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key B.89.2. volumes You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes. Using volumes instead of environment variables is useful in the following scenarios: Mounting truststores or keystores with TLS certificates Mounting a properties file that is used to configure Kafka Connect connectors Example Secret with properties apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-user 2 dbPassword: my-password 1 The connector configuration in properties file format. 2 Database username and password properties used in the configuration. In this example, a Secret named mysecret is mounted to a volume named connector-config . In the config property, a configuration provider ( FileConfigProvider ) is specified, which will load configuration values from external sources. The Kafka FileConfigProvider is given the alias file , and will read and extract database username and password property values from the file to use in the connector configuration. Example external volumes set to values from a Secret apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 #... externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4 1 The alias for the configuration provider, which is used to define other configuration parameters. Use a comma-separated list if you want to add more than one provider. 2 The FileConfigProvider is the configuration provider that provides values from properties files. The parameter uses the alias from config.providers , taking the form config.providers.USD{alias}.class . 3 The name of the volume containing the Secret. Each volume must specify a name in the name property and a reference to ConfigMap or Secret. 4 The name of the Secret. The volumes are mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/ <volume-name> . For example, the files from a volume named connector-config would appear in the directory /opt/kafka/external-configuration/connector-config . The FileConfigProvider is used to read the values from the mounted properties files in connector configurations. Property Description env Allows to pass data from Secret or ConfigMap to the Kafka Connect pods as environment variables. 
ExternalConfigurationEnv array volumes Allows to pass data from Secret or ConfigMap to the Kafka Connect pods as volumes. ExternalConfigurationVolumeSource array B.90. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Description name Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . string valueFrom Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap. ExternalConfigurationEnvVarSource B.91. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Description configMapKeyRef Reference to a key in a ConfigMap. See external documentation of core/v1 configmapkeyselector . ConfigMapKeySelector secretKeyRef Reference to a key in a Secret. See external documentation of core/v1 secretkeyselector . SecretKeySelector B.92. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Description configMap Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. See external documentation of core/v1 configmapvolumesource . ConfigMapVolumeSource name Name of the volume which will be added to the Kafka Connect pods. string secret Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. See external documentation of core/v1 secretvolumesource . SecretVolumeSource B.93. KafkaConnectStatus schema reference Used in: KafkaConnect Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer B.94. ConnectorPlugin schema reference Used in: KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status Property Description type The type of the connector plugin. The available types are sink and source . string version The version of the connector plugin. string class The class of the connector plugin. string B.95. KafkaConnectS2I schema reference Property Description spec The specification of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2ISpec status The status of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2IStatus B.96. KafkaConnectS2ISpec schema reference Used in: KafkaConnectS2I Configures a Kafka Connect cluster with Source-to-Image (S2I) support. When extending Kafka Connect with connector plugins on OpenShift (only), you can use OpenShift builds and S2I to create a container image that is used by the Kafka Connect deployment. The configuration options are similar to Kafka Connect configuration using the KafkaConnectSpec schema . Property Description replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string buildResources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking.
Probe jvmOptions JVM Options for pods. JvmOptions affinity The property affinity has been deprecated. This feature should now be configured at path spec.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string clientRackInitImage The image of the init container used for initializing the client.rack . string config The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration insecureSourceRepository When true this configures the source repository with the 'Local' reference policy and an import policy that accepts insecure source tags. boolean rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack resources The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements . ResourceRequirements tls TLS configuration. KafkaConnectTls tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing version The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string B.97. KafkaConnectS2IStatus schema reference Used in: KafkaConnectS2I Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array buildConfigName The name of the build configuration. string labelSelector Label selector for pods providing this resource. 
string replicas The current number of pods being used to provide this resource. integer B.98. KafkaTopic schema reference Property Description spec The specification of the topic. KafkaTopicSpec status The status of the topic. KafkaTopicStatus B.99. KafkaTopicSpec schema reference Used in: KafkaTopic Property Description partitions The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. integer replicas The number of replicas the topic should have. integer config The topic configuration. map topicName The name of the topic. When absent this will default to the metadata.name of the topic. It is recommended to not set this unless the topic name is not a valid OpenShift resource name. string B.100. KafkaTopicStatus schema reference Used in: KafkaTopic Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer B.101. KafkaUser schema reference Property Description spec The specification of the user. KafkaUserSpec status The status of the Kafka User. KafkaUserStatus B.102. KafkaUserSpec schema reference Used in: KafkaUser Property Description authentication Authentication mechanism enabled for this Kafka user. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512]. KafkaUserTlsClientAuthentication , KafkaUserScramSha512ClientAuthentication authorization Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple]. KafkaUserAuthorizationSimple quotas Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced. Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas . KafkaUserQuotas template Template to specify how Kafka User Secrets are generated. KafkaUserTemplate B.103. KafkaUserTlsClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes the use of the type KafkaUserTlsClientAuthentication from KafkaUserScramSha512ClientAuthentication . It must have the value tls for the type KafkaUserTlsClientAuthentication . Property Description type Must be tls . string B.104. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes the use of the type KafkaUserScramSha512ClientAuthentication from KafkaUserTlsClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Description type Must be scram-sha-512 . string B.105. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes the use of the type KafkaUserAuthorizationSimple from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Description type Must be simple . string acls List of ACL rules which should be applied to this user. AclRule array B.106. AclRule schema reference Used in: KafkaUserAuthorizationSimple Configures an access control rule for a KafkaUser when brokers are using the AclAuthorizer .
Example KafkaUser configuration with authorization apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: prefix operation: Read B.106.1. resource Use the resource property to specify the resource that the rule applies to. Simple authorization supports four resource types, which are specified in the type property: Topics ( topic ) Consumer Groups ( group ) Clusters ( cluster ) Transactional IDs ( transactionalId ) For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property. Cluster type resources have no name. A name is specified as a literal or a prefix using the patternType property. Literal names are taken exactly as they are specified in the name field. Prefix names use the value from the name as a prefix, and will apply the rule to all resources with names starting with the value. B.106.2. type The type of rule, which is to allow or deny (not currently supported) an operation. The type field is optional. If type is unspecified, the ACL rule is treated as an allow rule. B.106.3. operation Specify an operation for the rule to allow or deny. The following operations are supported: Read Write Delete Alter Describe All IdempotentWrite ClusterAction Create AlterConfigs DescribeConfigs Only certain operations work with each resource. For more details about AclAuthorizer , ACLs and supported combinations of resources and operations, see Authorization and ACLs . B.106.4. host Use the host property to specify a remote host from which the rule is allowed or denied. Use an asterisk ( * ) to allow or deny the operation from all hosts. The host field is optional. If host is unspecified, the * value is used by default. Property Description host The host from which the action described in the ACL rule is allowed or denied. string operation Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) resource Indicates the resource for which given ACL rule applies. The type depends on the value of the resource.type property within the given object, which must be one of [topic, group, cluster, transactionalId]. AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource type The type of the rule. Currently the only supported type is allow . ACL rules with type allow are used to allow user to execute the specified operations. Default value is allow . string (one of [allow, deny]) B.107. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes the use of the type AclRuleTopicResource from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Description type Must be topic . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. 
The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) B.108. AclRuleGroupResource schema reference Used in: AclRule The type property is a discriminator that distinguishes the use of the type AclRuleGroupResource from AclRuleTopicResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value group for the type AclRuleGroupResource . Property Description type Must be group . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) B.109. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes the use of the type AclRuleClusterResource from AclRuleTopicResource , AclRuleGroupResource , AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Description type Must be cluster . string B.110. AclRuleTransactionalIdResource schema reference Used in: AclRule The type property is a discriminator that distinguishes the use of the type AclRuleTransactionalIdResource from AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource . It must have the value transactionalId for the type AclRuleTransactionalIdResource . Property Description type Must be transactionalId . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) B.111. KafkaUserQuotas schema reference Used in: KafkaUserSpec Kafka allows a user to set quotas to control the use of resources by clients. B.111.1. quotas Quotas are split into two categories: Network usage quotas, which are defined as the byte rate threshold for each group of clients sharing a quota CPU utilization quotas, which are defined as the percentage of time a client can utilize on request handler I/O threads and network threads of each broker within a quota window Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients. AMQ Streams supports user-level quotas, but not client-level quotas. An example Kafka user quotas spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 For more info about Kafka user quotas, refer to the Apache Kafka documentation .
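For orientation, quotas are typically combined with the authentication and authorization options described earlier in a single KafkaUser resource. The following is a minimal sketch, assuming a cluster named my-cluster and purely illustrative limit values:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  quotas:
    # Byte-rate thresholds apply per broker to the group of clients sharing the quota
    producerByteRate: 1048576
    consumerByteRate: 2097152
    # Percentage of time the client group may occupy request handler and network threads
    requestPercentage: 55
  # ...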
Property Description consumerByteRate A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. integer producerByteRate A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. integer requestPercentage A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. integer B.112. KafkaUserTemplate schema reference Used in: KafkaUserSpec Specify additional labels and annotations for the secret created by the User Operator. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... Property Description secret Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated. ResourceTemplate B.113. KafkaUserStatus schema reference Used in: KafkaUser Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer username Username. string secret The name of Secret where the credentials are stored. string B.114. KafkaMirrorMaker schema reference Property Description spec The specification of Kafka MirrorMaker. KafkaMirrorMakerSpec status The status of Kafka MirrorMaker. KafkaMirrorMakerStatus B.115. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Configures Kafka MirrorMaker. B.115.1. whitelist Use the whitelist property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B" or all topics using "*". You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. B.115.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. B.115.3. logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . Here we see examples of inline and external logging: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker spec: # ... 
logging: type: inline loggers: mirrormaker.root.logger: "INFO" # ... apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker spec: # ... logging: type: external name: customConfigMap # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . Property Description replicas The number of pods in the Deployment . integer image The docker image for the pods. string whitelist List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the whitelist 'A|B' . Or, as a special case, you can mirror all topics using the whitelist '*'. You can also specify multiple regular expressions separated by commas. string consumer Configuration of source cluster. KafkaMirrorMakerConsumerSpec producer Configuration of target cluster. KafkaMirrorMakerProducerSpec resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements affinity The property affinity has been deprecated. This feature should now be configured at path spec.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array jvmOptions JVM Options for pods. JvmOptions logging Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The Prometheus JMX Exporter configuration. See JMX Exporter documentation for details of the structure of this configuration. map tracing The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. KafkaMirrorMakerTemplate livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe version The Kafka MirrorMaker version. Defaults to 2.6.0. Consult the documentation to understand the process required to upgrade or downgrade the version. string B.116. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Configures a MirrorMaker consumer. B.116.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. B.116.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. B.116.3. config Use the consumer.config properties to configure Kafka options for the consumer. 
The config property contains the Kafka MirrorMaker consumer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers group.id interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. B.116.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. Property Description numStreams Specifies the number of consumer stream threads to create. integer offsetCommitInterval Specifies the offset auto-commit interval in ms. Default value is 60000. integer groupId A unique string that identifies the consumer group this consumer belongs to. string bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls B.117. KafkaMirrorMakerTls schema reference Used in: KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Configures TLS trusted certificates for connecting MirrorMaker to the cluster. B.117.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array B.118. 
KafkaMirrorMakerProducerSpec schema reference Used in: KafkaMirrorMakerSpec Configures a MirrorMaker producer. B.118.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false , message sending errors are ignored. B.118.2. config Use the producer.config properties to configure Kafka options for the producer. The config property contains the Kafka MirrorMaker producer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.producer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. Property Description bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string abortOnSendFailure Flag to set the MirrorMaker to exit on a failed send. Default value is true . boolean authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls B.119. KafkaMirrorMakerTemplate schema reference Used in: KafkaMirrorMakerSpec Property Description deployment Template for Kafka MirrorMaker Deployment . ResourceTemplate pod Template for Kafka MirrorMaker Pods . PodTemplate mirrorMakerContainer Template for Kafka MirrorMaker container. ContainerTemplate podDisruptionBudget Template for Kafka MirrorMaker PodDisruptionBudget . PodDisruptionBudgetTemplate B.120. 
KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer B.121. KafkaBridge schema reference Property Description spec The specification of the Kafka Bridge. KafkaBridgeSpec status The status of the Kafka Bridge. KafkaBridgeStatus B.122. KafkaBridgeSpec schema reference Used in: KafkaBridge Configures a Kafka Bridge cluster. Configuration options relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Consumer configuration Producer configuration HTTP configuration B.122.1. logging Kafka Bridge has its own configurable loggers: logger.bridge logger. <operation-id> You can replace <operation-id> in the logger. <operation-id> logger to set log levels for specific operations: createConsumer deleteConsumer subscribe unsubscribe poll assign commit send sendToPartition seekToBeginning seekToEnd seek healthy ready openapi Each operation is defined according to the OpenAPI specification, and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests. Each logger has to be configured by assigning it a name of http.openapi.operation. <operation-id> . For example, configuring the logging level for the send operation logger means defining the following: logger.send.name = http.openapi.operation.send logger.send.level = DEBUG Kafka Bridge uses the Apache log4j2 logger implementation. Loggers are defined in the log4j2.properties file, which has the following default configuration for healthy and ready endpoints: logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN The log level of all other operations is set to INFO by default. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge spec: # ... logging: type: inline loggers: logger.bridge.level: "INFO" # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: "DEBUG" # ... External logging apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge spec: # ... logging: type: external name: customConfigMap # ... Any available loggers that are not configured have their level set to OFF . If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . Property Description replicas The number of pods in the Deployment . integer image The docker image for the pods. string bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster.
string tls TLS configuration for connecting Kafka Bridge to the cluster. KafkaBridgeTls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth http The HTTP related configuration. KafkaBridgeHttpConfig consumer Kafka consumer related configuration. KafkaBridgeConsumerSpec producer Kafka producer related configuration. KafkaBridgeProducerSpec resources CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements . ResourceRequirements jvmOptions Currently not supported JVM Options for pods. JvmOptions logging Logging configuration for Kafka Bridge. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging enableMetrics Enable the metrics for the Kafka Bridge. Default is false. boolean livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe template Template for Kafka Bridge resources. The template allows users to specify how the Deployment and Pods are generated. KafkaBridgeTemplate tracing The configuration of tracing in Kafka Bridge. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing B.123. KafkaBridgeTls schema reference Used in: KafkaBridgeSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array B.124. KafkaBridgeHttpConfig schema reference Used in: KafkaBridgeSpec Configures HTTP access to a Kafka cluster for the Kafka Bridge. The default HTTP configuration is for the Kafka Bridge to listen on port 8080. B.124.1. cors As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression. Example Kafka Bridge HTTP configuration apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # ... http: port: 8080 cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # ... Property Description port The port on which the server is listening. integer cors CORS configuration for the HTTP Bridge. KafkaBridgeHttpCors B.125. KafkaBridgeHttpCors schema reference Used in: KafkaBridgeHttpConfig Property Description allowedOrigins List of allowed origins. Java regular expressions can be used. string array allowedMethods List of allowed HTTP methods. string array B.126. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl.
sasl. security. bootstrap.servers group.id When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... Property Description config The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map B.127. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Configures producer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for producers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. bootstrap.servers When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... Property Description config The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map B.128. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Description deployment Template for Kafka Bridge Deployment . ResourceTemplate pod Template for Kafka Bridge Pods . 
PodTemplate apiService Template for Kafka Bridge API Service . ResourceTemplate bridgeContainer Template for the Kafka Bridge container. ContainerTemplate podDisruptionBudget Template for Kafka Bridge PodDisruptionBudget . PodDisruptionBudgetTemplate B.129. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL at which external client applications can access the Kafka Bridge. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer B.130. KafkaConnector schema reference Property Description spec The specification of the Kafka Connector. KafkaConnectorSpec status The status of the Kafka Connector. KafkaConnectorStatus B.131. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Description class The Class for the Kafka Connector. string tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean B.132. KafkaConnectorStatus schema reference Used in: KafkaConnector Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer connectorStatus The connector status, as reported by the Kafka Connect REST API. map tasksMax The maximum number of tasks for the Kafka Connector. integer B.133. KafkaMirrorMaker2 schema reference Property Description spec The specification of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Spec status The status of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Status B.134. KafkaMirrorMaker2Spec schema reference Used in: KafkaMirrorMaker2 Property Description replicas The number of pods in the Kafka Connect group. integer version The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string image The docker image for the pods. string connectCluster The cluster alias used for Kafka Connect. The alias must match a cluster in the list at spec.clusters . string clusters Kafka clusters for mirroring. KafkaMirrorMaker2ClusterSpec array mirrors Configuration of the MirrorMaker 2.0 connectors. KafkaMirrorMaker2MirrorSpec array resources The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions affinity The property affinity has been deprecated. This feature should now be configured at path spec.template.pod.affinity . The pod's affinity rules. See external documentation of core/v1 affinity . Affinity tolerations The property tolerations has been deprecated. This feature should now be configured at path spec.template.pod.tolerations . The pod's tolerations. See external documentation of core/v1 toleration . Toleration array logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. 
InlineLogging , ExternalLogging metrics The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration B.135. KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Configures Kafka clusters for mirroring. B.135.1. config Use the config properties to configure Kafka options. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Property Description alias Alias used to reference the Kafka cluster. string bootstrapServers A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. string config The MirrorMaker 2.0 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker 2.0 connectors to a cluster. KafkaMirrorMaker2Tls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth B.136. KafkaMirrorMaker2Tls schema reference Used in: KafkaMirrorMaker2ClusterSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array B.137. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Description sourceCluster The alias of the source cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string targetCluster The alias of the target cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string sourceConnector The specification of the Kafka MirrorMaker 2.0 source connector. KafkaMirrorMaker2ConnectorSpec checkpointConnector The specification of the Kafka MirrorMaker 2.0 checkpoint connector. KafkaMirrorMaker2ConnectorSpec heartbeatConnector The specification of the Kafka MirrorMaker 2.0 heartbeat connector. KafkaMirrorMaker2ConnectorSpec topicsPattern A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. string topicsBlacklistPattern A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. 
string groupsPattern A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. string groupsBlacklistPattern A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string B.138. KafkaMirrorMaker2ConnectorSpec schema reference Used in: KafkaMirrorMaker2MirrorSpec Property Description tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean B.139. KafkaMirrorMaker2Status schema reference Used in: KafkaMirrorMaker2 Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array connectors List of MirrorMaker 2.0 connector statuses, as reported by the Kafka Connect REST API. map array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer B.140. KafkaRebalance schema reference Property Description spec The specification of the Kafka rebalance. KafkaRebalanceSpec status The status of the Kafka rebalance. KafkaRebalanceStatus B.141. KafkaRebalanceSpec schema reference Used in: KafkaRebalance Property Description goals A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. string array skipHardGoalCheck Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution being found. Default is false. boolean excludedTopics A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. string concurrentPartitionMovementsPerBroker The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. integer concurrentIntraBrokerPartitionMovements The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. integer concurrentLeaderMovements The upper bound of ongoing partition leadership movements. Default is 1000. integer replicationThrottle The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. integer replicaMovementStrategies A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. string array B.142. KafkaRebalanceStatus schema reference Used in: KafkaRebalance Property Description conditions List of status conditions.
Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer sessionId The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. string optimizationResult A JSON object describing the optimization result. map
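As a closing illustration of the KafkaRebalanceSpec options listed in B.141, the following is a minimal sketch of a KafkaRebalance resource. The apiVersion, cluster label value, goal names, and numeric limits shown here are illustrative assumptions rather than required values; consult the Cruise Control goals list referenced above for the goals supported by your deployment:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # An empty goals list falls back to the default.goals configured in Cruise Control
  goals:
    - RackAwareGoal
    - ReplicaCapacityGoal
  skipHardGoalCheck: false
  # Topics matching this pattern are excluded from optimization proposals
  excludedTopics: "my-internal-.*"
  concurrentPartitionMovementsPerBroker: 5
  concurrentLeaderMovements: 1000
  # Bandwidth limit, in bytes per second, for moving replicas
  replicationThrottle: 10485760
  # ...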
logger.send.level = DEBUG", "logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge spec: # logging: type: inline loggers: logger.bridge.level: \"INFO\" # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: \"DEBUG\" #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge spec: # logging: type: external name: customConfigMap #", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/api_reference-str
Chapter 125. KafkaBridgeAdminClientSpec schema reference
Chapter 125. KafkaBridgeAdminClientSpec schema reference Used in: KafkaBridgeSpec Property Description config The Kafka AdminClient configuration used for AdminClient instances created by the bridge. Type: map
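In a KafkaBridge resource this configuration is supplied inline under the bridge spec. The following is a minimal sketch only: it assumes the property is exposed as adminClient within KafkaBridgeSpec (as the "Used in" reference suggests), and the apiVersion, resource name, and AdminClient options shown are illustrative placeholders rather than values taken from this reference.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  # adminClient property name assumed from the "Used in: KafkaBridgeSpec" reference above
  adminClient:
    config:
      # standard Kafka AdminClient options passed through to the bridge's AdminClient instances
      request.timeout.ms: 30000
      ssl.endpoint.identification.algorithm: HTTPS
  # ...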
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaBridgeAdminClientSpec-reference
Chapter 23. Dynamic Host Configuration Protocol (DHCP)
Chapter 23. Dynamic Host Configuration Protocol (DHCP) Dynamic Host Configuration Protocol (DHCP) is a network protocol for automatically assigning TCP/IP information to client machines. Each DHCP client connects to the centrally located DHCP server, which returns that client's network configuration, including the IP address, gateway, and DNS servers. 23.1. Why Use DHCP? DHCP is useful for automatic configuration of client network interfaces. When configuring the client system, the administrator can choose DHCP instead of entering an IP address, netmask, gateway, or DNS servers; the client retrieves this information from the DHCP server. DHCP is also useful if an administrator wants to change the IP addresses of a large number of systems. Instead of reconfiguring all the systems, the administrator can edit one DHCP configuration file on the server to cover the new set of IP addresses. If the DNS servers for an organization change, the changes are made on the DHCP server, not on the DHCP clients. Once the network is restarted on the clients (or the clients are rebooted), the changes take effect. Furthermore, if a laptop or any other mobile computer is configured for DHCP, it can be moved from office to office without being reconfigured, as long as each office has a DHCP server that allows it to connect to the network.
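On the server side, this centrally managed information is defined in the DHCP server's /etc/dhcpd.conf file. The following is a minimal sketch of a subnet declaration; the network addresses, DNS servers, and lease times are placeholder values chosen for illustration, not recommended settings.

subnet 192.168.1.0 netmask 255.255.255.0 {
    # example values only: network settings handed out to every client on this subnet
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.10, 192.168.1.11;
    # pool of addresses the server may lease, and lease durations in seconds
    range 192.168.1.100 192.168.1.200;
    default-lease-time 21600;
    max-lease-time 43200;
}

After editing the configuration file, restarting the dhcpd service applies the new settings to subsequent client requests.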
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/dynamic_host_configuration_protocol_dhcp
Preface
Preface As a developer or system administrator, you can modify Red Hat Decision Manager and KIE Server settings and properties to meet your business needs. You can modify the behavior of the Red Hat Decision Manager runtime, the Business Central interface, or the KIE Server.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/pr01
Chapter 7. keystone
Chapter 7. keystone The following chapter contains information about the configuration options in the keystone service. 7.1. keystone.conf This section contains options for the /etc/keystone/keystone.conf file. 7.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/keystone/keystone.conf file. . Configuration option = Default value Type Description admin_token = None string value Using this feature is NOT recommended. Instead, use the keystone-manage bootstrap command. The value of this option is treated as a "shared secret" that can be used to bootstrap Keystone through the API. This "token" does not represent a user (it has no identity), and carries no explicit authorization (it effectively bypasses most authorization checks). If set to None , the value is ignored and the admin_token middleware is effectively disabled. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = keystone string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = None string value Default publisher_id for outgoing notifications. If left undefined, Keystone will default to using the server's host name. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. insecure_debug = False boolean value If set to true, then the server will return information in HTTP responses that may allow an unauthenticated or authenticated user to get more information than normal, such as additional details about why authentication failed. This may be useful for debugging but is insecure. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. list_limit = None integer value The maximum number of entities that will be returned in a collection. This global limit may be then overridden for a specific driver, by specifying a list_limit in the appropriate section (for example, [assignment] ). No limit is set by default. In larger deployments, it is recommended that you set this to a reasonable number to prevent operations like listing all users and projects from placing an unnecessary load on the system. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. 
For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used, then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_param_size = 64 integer value Limit the sizes of user & project ID/names. max_project_tree_depth = 5 integer value Maximum depth of the project hierarchy, excluding the project acting as a domain at the top of the hierarchy. WARNING: Setting it to a large value may adversely impact performance. max_token_size = 255 integer value Similar to [DEFAULT] max_param_size , but provides an exception for token values. With Fernet tokens, this can be set as low as 255. With UUID tokens, this should be set to 32. notification_format = cadf string value Define the notification format for identity service events. A basic notification only has information about the resource being operated on. A cadf notification has the same information, as well as information about the initiator of the event. The cadf option is entirely backwards compatible with the basic option, but is fully CADF-compliant, and is recommended for auditing use cases.
notification_opt_out = ['identity.authenticate.success', 'identity.authenticate.pending', 'identity.authenticate.failed'] multi valued You can reduce the number of notifications keystone emits by explicitly opting out. Keystone will not emit notifications that match the patterns expressed in this list. Values are expected to be in the form of identity.<resource_type>.<operation> . By default, all notifications related to authentication are automatically suppressed. This field can be set multiple times in order to opt-out of multiple notification topics. For example, the following suppresses notifications describing user creation or successful authentication events: notification_opt_out=identity.user.create notification_opt_out=identity.authenticate.success public_endpoint = None uri value The base public endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:5000/v3/users , then this option will be automatically treated as http://server:5000 . You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer ( /prefix/v3 ), or if the endpoint should be found on a different host. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater than or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. strict_password_check = False boolean value If set to true, strict password length checking is performed for password manipulation. If a password exceeds the maximum length, the operation will fail with an HTTP 403 Forbidden error. If set to false, passwords are automatically truncated to the maximum length. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 7.1.2. application_credential The following table outlines the options available under the [application_credential] group in the /etc/keystone/keystone.conf file. Table 7.1. application_credential Configuration option = Default value Type Description cache_time = None integer value Time to cache application credential data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for application credential caching. This has no effect unless global caching is enabled. driver = sql string value Entry point for the application credential backend driver in the keystone.application_credential namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. user_limit = -1 integer value Maximum number of application credentials a user is permitted to create. A value of -1 means unlimited. If a limit is not set, users are permitted to create application credentials at will, which could lead to bloat in the keystone database or open keystone to a DoS attack. 7.1.3. assignment The following table outlines the options available under the [assignment] group in the /etc/keystone/keystone.conf file. Table 7.2. assignment Configuration option = Default value Type Description driver = sql string value Entry point for the assignment backend driver (where role assignments are stored) in the keystone.assignment namespace. Only a SQL driver is supplied by keystone itself. Unless you are writing proprietary drivers for keystone, you do not need to set this option. prohibited_implied_role = ['admin'] list value A list of role names which are prohibited from being an implied role. 7.1.4. auth The following table outlines the options available under the [auth] group in the /etc/keystone/keystone.conf file. Table 7.3. auth Configuration option = Default value Type Description application_credential = None string value Entry point for the application_credential auth plugin module in the keystone.auth.application_credential namespace. You do not need to set this unless you are overriding keystone's own application_credential authentication plugin. external = None string value Entry point for the external ( REMOTE_USER ) auth plugin module in the keystone.auth.external namespace. Supplied drivers are DefaultDomain and Domain . The default driver is DefaultDomain , which assumes that all users identified by the username specified to keystone in the REMOTE_USER variable exist within the context of the default domain. The Domain option expects an additional environment variable be presented to keystone, REMOTE_DOMAIN , containing the domain name of the REMOTE_USER (if REMOTE_DOMAIN is not set, then the default domain will be used instead). You do not need to set this unless you are taking advantage of "external authentication", where the application server (such as Apache) is handling authentication instead of keystone. 
mapped = None string value Entry point for the mapped auth plugin module in the keystone.auth.mapped namespace. You do not need to set this unless you are overriding keystone's own mapped authentication plugin. methods = ['external', 'password', 'token', 'oauth1', 'mapped', 'application_credential'] list value Allowed authentication methods. Note: You should disable the external auth method if you are currently using federation. External auth and federation both use the REMOTE_USER variable. Since both the mapped and external plugin are being invoked to validate attributes in the request environment, it can cause conflicts. oauth1 = None string value Entry point for the OAuth 1.0a auth plugin module in the keystone.auth.oauth1 namespace. You do not need to set this unless you are overriding keystone's own oauth1 authentication plugin. password = None string value Entry point for the password auth plugin module in the keystone.auth.password namespace. You do not need to set this unless you are overriding keystone's own password authentication plugin. token = None string value Entry point for the token auth plugin module in the keystone.auth.token namespace. You do not need to set this unless you are overriding keystone's own token authentication plugin. 7.1.5. cache The following table outlines the options available under the [cache] group in the /etc/keystone/keystone.conf file. Table 7.4. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enabled = True boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). 
memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. 7.1.6. catalog The following table outlines the options available under the [catalog] group in the /etc/keystone/keystone.conf file. Table 7.5. catalog Configuration option = Default value Type Description cache_time = None integer value Time to cache catalog data (in seconds). This has no effect unless global and catalog caching are both enabled. Catalog data (services, endpoints, etc.) typically does not change frequently, and so a longer duration than the global default may be desirable. caching = True boolean value Toggle for catalog caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = sql string value Entry point for the catalog driver in the keystone.catalog namespace. Keystone provides a sql option (which supports basic CRUD operations through SQL), a templated option (which loads the catalog from a templated catalog file on disk), and a endpoint_filter.sql option (which supports arbitrary service catalogs per project). list_limit = None integer value Maximum number of entities that will be returned in a catalog collection. There is typically no reason to set this, as it would be unusual for a deployment to have enough services or endpoints to exceed a reasonable limit. template_file = default_catalog.templates string value Absolute path to the file used for the templated catalog backend. This option is only used if the [catalog] driver is set to templated . 7.1.7. cors The following table outlines the options available under the [cors] group in the /etc/keystone/keystone.conf file. Table 7.6. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Project-Id', 'X-Project-Name', 'X-Project-Domain-Id', 'X-Project-Domain-Name', 'X-Domain-Id', 'X-Domain-Name', 'Openstack-Auth-Receipt'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'Openstack-Auth-Receipt'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 7.1.8. credential The following table outlines the options available under the [credential] group in the /etc/keystone/keystone.conf file. Table 7.7. credential Configuration option = Default value Type Description cache_time = None integer value Time to cache credential data in seconds. 
This has no effect unless global caching is enabled. caching = True boolean value Toggle for caching only on retrieval of user credentials. This has no effect unless global caching is enabled. driver = sql string value Entry point for the credential backend driver in the keystone.credential namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. key_repository = /etc/keystone/credential-keys/ string value Directory containing Fernet keys used to encrypt and decrypt credentials stored in the credential backend. Fernet keys used to encrypt credentials have no relationship to Fernet keys used to encrypt Fernet tokens. Both sets of keys should be managed separately and require different rotation policies. Do not share this repository with the repository used to manage keys for Fernet tokens. provider = fernet string value Entry point for credential encryption and decryption operations in the keystone.credential.provider namespace. Keystone only provides a fernet driver, so there's no reason to change this unless you are providing a custom entry point to encrypt and decrypt credentials. 7.1.9. database The following table outlines the options available under the [database] group in the /etc/keystone/keystone.conf file. Table 7.8. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. 
retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 7.1.10. domain_config The following table outlines the options available under the [domain_config] group in the /etc/keystone/keystone.conf file. Table 7.9. domain_config Configuration option = Default value Type Description cache_time = 300 integer value Time-to-live (TTL, in seconds) to cache domain-specific configuration data. This has no effect unless [domain_config] caching is enabled. caching = True boolean value Toggle for caching of the domain-specific configuration backend. This has no effect unless global caching is enabled. There is normally no reason to disable this. driver = sql string value Entry point for the domain-specific configuration driver in the keystone.resource.domain_config namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. 7.1.11. endpoint_filter The following table outlines the options available under the [endpoint_filter] group in the /etc/keystone/keystone.conf file. Table 7.10. endpoint_filter Configuration option = Default value Type Description driver = sql string value Entry point for the endpoint filter driver in the keystone.endpoint_filter namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. return_all_endpoints_if_no_filter = True boolean value This controls keystone's behavior if the configured endpoint filters do not result in any endpoints for a user + project pair (and therefore a potentially empty service catalog). If set to true, keystone will return the entire service catalog. If set to false, keystone will return an empty service catalog. 7.1.12. endpoint_policy The following table outlines the options available under the [endpoint_policy] group in the /etc/keystone/keystone.conf file. Table 7.11. endpoint_policy Configuration option = Default value Type Description driver = sql string value Entry point for the endpoint policy driver in the keystone.endpoint_policy namespace. Only a sql driver is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. 7.1.13. eventlet_server The following table outlines the options available under the [eventlet_server] group in the /etc/keystone/keystone.conf file. Table 7.12. eventlet_server Configuration option = Default value Type Description admin_bind_host = 0.0.0.0 host address value The IP address of the network interface for the admin service to listen on. admin_port = 35357 port value The port number for the admin service to listen on. public_bind_host = 0.0.0.0 host address value The IP address of the network interface for the public service to listen on. public_port = 5000 port value The port number for the public service to listen on. 7.1.14. federation The following table outlines the options available under the [federation] group in the /etc/keystone/keystone.conf file. Table 7.13. federation Configuration option = Default value Type Description `assertion_prefix = ` string value Prefix to use when filtering environment variable names for federated assertions. 
Matched variables are passed into the federated mapping engine. caching = True boolean value Toggle for federation caching. This has no effect unless global caching is enabled. There is typically no reason to disable this. driver = sql string value Entry point for the federation backend driver in the keystone.federation namespace. Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point. federated_domain_name = Federated string value An arbitrary domain name that is reserved to allow federated ephemeral users to have a domain concept. Note that an admin will not be able to create a domain with this name or update an existing domain to this name. You are not advised to change this value unless you really have to. remote_id_attribute = None string value Default value for all protocols to be used to obtain the entity ID of the Identity Provider from the environment. For mod_shib , this would be Shib-Identity-Provider . For mod_auth_openidc , this could be HTTP_OIDC_ISS . For mod_auth_mellon , this could be MELLON_IDP . This can be overridden on a per-protocol basis by providing a remote_id_attribute to the federation protocol using the API. sso_callback_template = /etc/keystone/sso_callback_template.html string value Absolute path to an HTML file used as a Single Sign-On callback handler. This page is expected to redirect the user from keystone back to a trusted dashboard host, by form encoding a token in a POST request. Keystone's default value should be sufficient for most deployments. trusted_dashboard = [] multi valued A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. This configuration option may be repeated for multiple values. You must set this in order to use web-based SSO flows. For example: trusted_dashboard=https://acme.example.com/auth/websso trusted_dashboard=https://beta.example.com/auth/websso 7.1.15. fernet_receipts The following table outlines the options available under the [fernet_receipts] group in the /etc/keystone/keystone.conf file. Table 7.14. fernet_receipts Configuration option = Default value Type Description key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet receipt keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate , and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for receipt validation, a single primary key (always the highest index) used for receipt creation and validation, and any number of secondary keys (all other index values) used for receipt validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0). 
It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move ( mv ) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in receipts that can not be validated by all nodes. max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation. 7.1.16. fernet_tokens The following table outlines the options available under the [fernet_tokens] group in the /etc/keystone/keystone.conf file. Table 7.15. fernet_tokens Configuration option = Default value Type Description key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet token keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate , and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for token validation, a single primary key (always the highest index) used for token creation and validation, and any number of secondary keys (all other index values) used for token validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0). It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move ( mv ) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in tokens that can not be validated by all nodes. max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation. 7.1.17. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/keystone/keystone.conf file. Table 7.16. 
healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 7.1.18. identity The following table outlines the options available under the [identity] group in the /etc/keystone/keystone.conf file. Table 7.17. identity Configuration option = Default value Type Description cache_time = 600 integer value Time to cache identity data (in seconds). This has no effect unless global and identity caching are enabled. caching = True boolean value Toggle for identity caching. This has no effect unless global caching is enabled. There is typically no reason to disable this. default_domain_id = default string value This references the domain to use for all Identity API v2 requests (which are not aware of domains). A domain with this ID can optionally be created for you by keystone-manage bootstrap . The domain referenced by this ID cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. There is nothing special about this domain, other than the fact that it must exist in order to maintain support for your v2 clients. There is typically no reason to change this value. domain_config_dir = /etc/keystone/domains string value Absolute path where keystone should locate domain-specific [identity] configuration files. This option has no effect unless [identity] domain_specific_drivers_enabled is set to true. There is typically no reason to change this value. domain_configurations_from_database = False boolean value By default, domain-specific configuration data is read from files in the directory identified by [identity] domain_config_dir . Enabling this configuration option allows you to instead manage domain-specific configurations through the API, which are then persisted in the backend (typically, a SQL database), rather than using configuration files on disk. domain_specific_drivers_enabled = False boolean value A subset (or all) of domains can have their own identity driver, each with their own partial configuration options, stored in either the resource backend or in a file in a domain configuration directory (depending on the setting of [identity] domain_configurations_from_database ). Only values specific to the domain need to be specified in this manner. This feature is disabled by default, but may be enabled by default in a future release; set to true to enable. driver = sql string value Entry point for the identity backend driver in the keystone.identity namespace. Keystone provides a sql and ldap driver.
This option is also used as the default driver selection (along with the other configuration variables in this section) in the event that [identity] domain_specific_drivers_enabled is enabled, but no applicable domain-specific configuration is defined for the domain in question. Unless your deployment primarily relies on ldap AND is not using domain-specific configuration, you should typically leave this set to sql . list_limit = None integer value Maximum number of entities that will be returned in an identity collection. max_password_length = 4096 integer value Maximum allowed length for user passwords. Decrease this value to improve performance. Changing this value does not affect existing passwords. password_hash_algorithm = bcrypt string value The password hashing algorithm to use for passwords stored within keystone. password_hash_rounds = None integer value This option represents a trade off between security and performance. Higher values lead to slower performance, but higher security. Changing this option will only affect newly created passwords as existing password hashes already have a fixed number of rounds applied, so it is safe to tune this option in a running cluster. The default for bcrypt is 12, must be between 4 and 31, inclusive. The default for scrypt is 16, must be within range(1,32) . The default for pbkdf2_sha512 is 60000, must be within range(1,1<<32) . WARNING: If using scrypt, increasing this value increases BOTH time AND memory requirements to hash a password. salt_bytesize = None integer value Number of bytes to use in scrypt and pbkdf2_sha512 hashing salt. Default for scrypt is 16 bytes. Default for pbkdf2_sha512 is 16 bytes. Limited to a maximum of 96 bytes due to the size of the column used to store password hashes. scrypt_block_size = None integer value Optional block size to pass to scrypt hash function (the r parameter). Useful for tuning scrypt to optimal performance for your CPU architecture. This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 8. scrypt_parallelism = None integer value Optional parallelism to pass to scrypt hash function (the p parameter). This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 1. 7.1.19. identity_mapping The following table outlines the options available under the [identity_mapping] group in the /etc/keystone/keystone.conf file. Table 7.18. identity_mapping Configuration option = Default value Type Description backward_compatible_ids = True boolean value The format of user and group IDs changed in Juno for backends that do not generate UUIDs (for example, LDAP), with keystone providing a hash mapping to the underlying attribute in LDAP. By default this mapping is disabled, which ensures that existing IDs will not change. Even when the mapping is enabled by using domain-specific drivers ( [identity] domain_specific_drivers_enabled ), any users and groups from the default domain being handled by LDAP will still not be mapped to ensure their IDs remain backward compatible. Setting this value to false will enable the new mapping for all backends, including the default LDAP driver. It is only guaranteed to be safe to enable this option if you do not already have assignments for users and groups from the default LDAP domain, and you consider it to be acceptable for Keystone to provide the different IDs to clients than it did previously (existing IDs in the API will suddenly change).
Typically this means that the only time you can set this value to false is when configuring a fresh installation, although that is the recommended value. driver = sql string value Entry point for the identity mapping backend driver in the keystone.identity.id_mapping namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. generator = sha256 string value Entry point for the public ID generator for user and group entities in the keystone.identity.id_generator namespace. The Keystone identity mapper only supports generators that produce 64 bytes or less. Keystone only provides a sha256 entry point, so there is no reason to change this value unless you're providing a custom entry point. 7.1.20. jwt_tokens The following table outlines the options available under the [jwt_tokens] group in the /etc/keystone/keystone.conf file. Table 7.19. jwt_tokens Configuration option = Default value Type Description jws_private_key_repository = /etc/keystone/jws-keys/private string value Directory containing private keys for signing JWS tokens. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one private key that corresponds to a public key in keystone.conf [jwt_tokens] jws_public_key_repository . In the event there are multiple private keys in this directory, keystone will use a key named private.pem to sign tokens. In the future, keystone may support the ability to sign tokens with multiple private keys. For now, only a key named private.pem within this directory is required to issue JWS tokens. This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws . jws_public_key_repository = /etc/keystone/jws-keys/public string value Directory containing public keys for validating JWS token signatures. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one public key that corresponds to a private key in keystone.conf [jwt_tokens] jws_private_key_repository . This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws . 7.1.21. ldap The following table outlines the options available under the [ldap] group in the /etc/keystone/keystone.conf file. Table 7.20. ldap Configuration option = Default value Type Description alias_dereferencing = default string value The LDAP dereferencing option to use for queries involving aliases. A value of default falls back to using default dereferencing behavior configured by your ldap.conf . A value of never prevents aliases from being dereferenced at all. A value of searching dereferences aliases only after name resolution. A value of finding dereferences aliases only during name resolution. A value of always dereferences aliases in all cases. auth_pool_connection_lifetime = 60 integer value The maximum end user authentication connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_auth_pool is also enabled. auth_pool_size = 100 integer value The size of the connection pool to use for end user authentication. This option has no effect unless [ldap] use_auth_pool is also enabled. 
chase_referrals = None boolean value Sets keystone's referral chasing behavior across directory partitions. If left unset, the system's default behavior will be used. connection_timeout = -1 integer value The connection timeout to use with the LDAP server. A value of -1 means that connections will never timeout. debug_level = None integer value Sets the LDAP debugging level for LDAP calls. A value of 0 means that debugging is not enabled. This value is a bitmask, consult your LDAP documentation for possible values. group_ad_nesting = False boolean value If enabled, group queries will use Active Directory specific filters for nested groups. group_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone group attribute pairs used for mapping additional attributes to groups in keystone. The expected format is <ldap_attr>:<group_attr> , where ldap_attr is the attribute in the LDAP object and group_attr is the attribute which should appear in the identity API. group_attribute_ignore = [] list value List of group attributes to ignore on create and update. or whether a specific group attribute should be filtered for list or show group. group_desc_attribute = description string value The LDAP attribute mapped to group descriptions in keystone. group_filter = None string value The LDAP search filter to use for groups. group_id_attribute = cn string value The LDAP attribute mapped to group IDs in keystone. This must NOT be a multivalued attribute. Group IDs are expected to be globally unique across keystone domains and URL-safe. group_member_attribute = member string value The LDAP attribute used to indicate that a user is a member of the group. group_members_are_ids = False boolean value Enable this option if the members of the group object class are keystone user IDs rather than LDAP DNs. This is the case when using posixGroup as the group object class in Open Directory. group_name_attribute = ou string value The LDAP attribute mapped to group names in keystone. Group names are expected to be unique only within a keystone domain and are not expected to be URL-safe. group_objectclass = groupOfNames string value The LDAP object class to use for groups. If setting this option to posixGroup , you may also be interested in enabling the [ldap] group_members_are_ids option. group_tree_dn = None string value The search base to use for groups. Defaults to the [ldap] suffix value. page_size = 0 integer value Defines the maximum number of results per page that keystone should request from the LDAP server when listing objects. A value of zero ( 0 ) disables paging. password = None string value The password of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. pool_connection_lifetime = 600 integer value The maximum connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_pool is also enabled. pool_connection_timeout = -1 integer value The connection timeout to use when pooling LDAP connections. A value of -1 means that connections will never timeout. This option has no effect unless [ldap] use_pool is also enabled. pool_retry_delay = 0.1 floating point value The number of seconds to wait before attempting to reconnect to the LDAP server. This option has no effect unless [ldap] use_pool is also enabled. 
pool_retry_max = 3 integer value The maximum number of times to attempt reconnecting to the LDAP server before aborting. A value of zero prevents retries. This option has no effect unless [ldap] use_pool is also enabled. pool_size = 10 integer value The size of the LDAP connection pool. This option has no effect unless [ldap] use_pool is also enabled. query_scope = one string value The search scope which defines how deep to search within the search base. A value of one (representing oneLevel or singleLevel ) indicates a search of objects immediately below to the base object, but does not include the base object itself. A value of sub (representing subtree or wholeSubtree ) indicates a search of both the base object itself and the entire subtree below it. suffix = cn=example,cn=com string value The default LDAP server suffix to use, if a DN is not defined via either [ldap] user_tree_dn or [ldap] group_tree_dn . tls_cacertdir = None string value An absolute path to a CA certificate directory to use when communicating with LDAP servers. There is no reason to set this option if you've also set [ldap] tls_cacertfile . tls_cacertfile = None string value An absolute path to a CA certificate file to use when communicating with LDAP servers. This option will take precedence over [ldap] tls_cacertdir , so there is no reason to set both. tls_req_cert = demand string value Specifies which checks to perform against client certificates on incoming TLS sessions. If set to demand , then a certificate will always be requested and required from the LDAP server. If set to allow , then a certificate will always be requested but not required from the LDAP server. If set to never , then a certificate will never be requested. url = ldap://localhost string value URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified as a comma separated string. The first URL to successfully bind is used for the connection. use_auth_pool = True boolean value Enable LDAP connection pooling for end user authentication. There is typically no reason to disable this. use_pool = True boolean value Enable LDAP connection pooling for queries to the LDAP server. There is typically no reason to disable this. use_tls = False boolean value Enable TLS when communicating with LDAP servers. You should also set the [ldap] tls_cacertfile and [ldap] tls_cacertdir options when using this option. Do not set this option if you are using LDAP over SSL (LDAPS) instead of TLS. user = None string value The user name of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. user_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone user attribute pairs used for mapping additional attributes to users in keystone. The expected format is <ldap_attr>:<user_attr> , where ldap_attr is the attribute in the LDAP object and user_attr is the attribute which should appear in the identity API. user_attribute_ignore = ['default_project_id'] list value List of user attributes to ignore on create and update, or whether a specific user attribute should be filtered for list or show user. user_default_project_id_attribute = None string value The LDAP attribute mapped to a user's default_project_id in keystone. This is most commonly used when keystone has write access to LDAP. user_description_attribute = description string value The LDAP attribute mapped to user descriptions in keystone. 
user_enabled_attribute = enabled string value The LDAP attribute mapped to the user enabled attribute in keystone. If setting this option to userAccountControl , then you may be interested in setting [ldap] user_enabled_mask and [ldap] user_enabled_default as well. user_enabled_default = True string value The default value to enable users. This should match an appropriate integer value if the LDAP server uses non-boolean (bitmask) values to indicate if a user is enabled or disabled. If this is not set to True , then the typical value is 512 . This is typically used when [ldap] user_enabled_attribute = userAccountControl . user_enabled_emulation = False boolean value If enabled, keystone uses an alternative method to determine if a user is enabled or not by checking if they are a member of the group defined by the [ldap] user_enabled_emulation_dn option. Enabling this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_enabled_emulation_dn = None string value DN of the group entry to hold enabled users when using enabled emulation. Setting this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_emulation_use_group_config = False boolean value Use the [ldap] group_member_attribute and [ldap] group_objectclass settings to determine membership in the emulated enabled group. Enabling this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_invert = False boolean value Logically negate the boolean value of the enabled attribute obtained from the LDAP server. Some LDAP servers use a boolean lock attribute where "true" means an account is disabled. Setting [ldap] user_enabled_invert = true will allow these lock attributes to be used. This option will have no effect if either the [ldap] user_enabled_mask or [ldap] user_enabled_emulation options are in use. user_enabled_mask = 0 integer value Bitmask integer to select which bit indicates the enabled value if the LDAP server represents "enabled" as a bit on an integer rather than as a discrete boolean. A value of 0 indicates that the mask is not used. If this is not set to 0 the typical value is 2 . This is typically used when [ldap] user_enabled_attribute = userAccountControl . Setting this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_filter = None string value The LDAP search filter to use for users. user_id_attribute = cn string value The LDAP attribute mapped to user IDs in keystone. This must NOT be a multivalued attribute. User IDs are expected to be globally unique across keystone domains and URL-safe. user_mail_attribute = mail string value The LDAP attribute mapped to user emails in keystone. user_name_attribute = sn string value The LDAP attribute mapped to user names in keystone. User names are expected to be unique only within a keystone domain and are not expected to be URL-safe. user_objectclass = inetOrgPerson string value The LDAP object class to use for users. user_pass_attribute = userPassword string value The LDAP attribute mapped to user passwords in keystone. user_tree_dn = None string value The search base to use for users. Defaults to the [ldap] suffix value. 7.1.22. memcache The following table outlines the options available under the [memcache] group in the /etc/keystone/keystone.conf file. Table 7.21. memcache Configuration option = Default value Type Description dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. 
This is used by the key value store system. pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. This is used by the key value store system. pool_maxsize = 10 integer value Max total number of open connections to every memcached server. This is used by the key value store system. pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. This is used by the key value store system. socket_timeout = 3 integer value Timeout in seconds for every call to a server. This is used by the key value store system. 7.1.23. oauth1 The following table outlines the options available under the [oauth1] group in the /etc/keystone/keystone.conf file. Table 7.22. oauth1 Configuration option = Default value Type Description access_token_duration = 86400 integer value Number of seconds for the OAuth Access Token to remain valid after being created. This is the amount of time the consumer has to interact with the service provider (which is typically keystone). Setting this option to zero means that access tokens will last forever. driver = sql string value Entry point for the OAuth backend driver in the keystone.oauth1 namespace. Typically, there is no reason to set this option unless you are providing a custom entry point. request_token_duration = 28800 integer value Number of seconds for the OAuth Request Token to remain valid after being created. This is the amount of time the user has to authorize the token. Setting this option to zero means that request tokens will last forever. 7.1.24. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/keystone/keystone.conf file. Table 7.23. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. 
default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. 
In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 7.1.25. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/keystone/keystone.conf file. Table 7.24. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 7.1.26. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/keystone/keystone.conf file. Table 7.25. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 7.1.27. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/keystone/keystone.conf file. Table 7.26. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True integer value Enable/Disable the RabbitMQ mandatory flag for direct send. 
The direct send is used as a reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native Python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 7.1.28. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/keystone/keystone.conf file. Table 7.27.
oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 7.1.29. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/keystone/keystone.conf file. Table 7.28. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 7.1.30. policy The following table outlines the options available under the [policy] group in the /etc/keystone/keystone.conf file. Table 7.29. policy Configuration option = Default value Type Description driver = sql string value Entry point for the policy backend driver in the keystone.policy namespace. Supplied drivers are rules (which does not support any CRUD operations for the v3 policy API) and sql . Typically, there is no reason to set this option unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a policy collection. 7.1.31. profiler The following table outlines the options available under the [profiler] group in the /etc/keystone/keystone.conf file. Table 7.30. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. 
mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. Profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error/exception to a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis Sentinel uses a service name to identify a master Redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 7.1.32. receipt The following table outlines the options available under the [receipt] group in the /etc/keystone/keystone.conf file. Table 7.31. receipt Configuration option = Default value Type Description cache_on_issue = True boolean value Enable storing issued receipt data to receipt validation cache so that first receipt validation doesn't actually cause full validation cycle. This option has no effect unless global caching and receipt caching are enabled. cache_time = 300 integer value The number of seconds to cache receipt creation and validation data. This has no effect unless both global and [receipt] caching are enabled.
caching = True boolean value Toggle for caching receipt creation and validation data. This has no effect unless global caching is enabled, or if cache_on_issue is disabled as we only cache receipts on issue. expiration = 300 integer value The amount of time that a receipt should remain valid (in seconds). This value should always be very short, as it represents how long a user has to reattempt auth with the missing auth methods. provider = fernet string value Entry point for the receipt provider in the keystone.receipt.provider namespace. The receipt provider controls the receipt construction and validation operations. Keystone includes just the fernet receipt provider for now. fernet receipts do not need to be persisted at all, but require that you run keystone-manage fernet_setup (also see the keystone-manage fernet_rotate command). 7.1.33. resource The following table outlines the options available under the [resource] group in the /etc/keystone/keystone.conf file. Table 7.32. resource Configuration option = Default value Type Description admin_project_domain_name = None string value Name of the domain that owns the admin_project_name . If left unset, then there is no admin project. [resource] admin_project_name must also be set to use this option. admin_project_name = None string value This is a special project which represents cloud-level administrator privileges across services. Tokens scoped to this project will contain a true is_admin_project attribute to indicate to policy systems that the role assignments on that specific project should apply equally across every project. If left unset, then there is no admin project, and thus no explicit means of cross-project role assignments. [resource] admin_project_domain_name must also be set to use this option. cache_time = None integer value Time to cache resource data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for resource caching. This has no effect unless global caching is enabled. domain_name_url_safe = off string value This controls whether the names of domains are restricted from containing URL-reserved characters. If set to new , attempts to create or update a domain with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe domain name will fail, thereby forcing all domain names to be updated to be URL-safe. driver = sql string value Entry point for the resource driver in the keystone.resource namespace. Only a sql driver is supplied by keystone. Unless you are writing proprietary drivers for keystone, you do not need to set this option. list_limit = None integer value Maximum number of entities that will be returned in a resource collection. project_name_url_safe = off string value This controls whether the names of projects are restricted from containing URL-reserved characters. If set to new , attempts to create or update a project with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe project name will fail, thereby forcing all project names to be updated to be URL-safe. 7.1.34. revoke The following table outlines the options available under the [revoke] group in the /etc/keystone/keystone.conf file. Table 7.33. revoke Configuration option = Default value Type Description cache_time = 3600 integer value Time to cache the revocation list and the revocation events (in seconds). This has no effect unless global and [revoke] caching are both enabled. 
caching = True boolean value Toggle for revocation event caching. This has no effect unless global caching is enabled. driver = sql string value Entry point for the token revocation backend driver in the keystone.revoke namespace. Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point. expiration_buffer = 1800 integer value The number of seconds after a token has expired before a corresponding revocation event may be purged from the backend. 7.1.35. role The following table outlines the options available under the [role] group in the /etc/keystone/keystone.conf file. Table 7.34. role Configuration option = Default value Type Description cache_time = None integer value Time to cache role data, in seconds. This has no effect unless both global caching and [role] caching are enabled. caching = True boolean value Toggle for role caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = None string value Entry point for the role backend driver in the keystone.role namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a role collection. This may be useful to tune if you have a large number of discrete roles in your deployment. 7.1.36. saml The following table outlines the options available under the [saml] group in the /etc/keystone/keystone.conf file. Table 7.35. saml Configuration option = Default value Type Description assertion_expiration_time = 3600 integer value Determines the lifetime for any SAML assertions generated by keystone, using NotOnOrAfter attributes. certfile = /etc/keystone/ssl/certs/signing_cert.pem string value Absolute path to the public certificate file to use for SAML signing. The value cannot contain a comma ( , ). idp_contact_company = Example, Inc. string value This is the company name of the identity provider's contact person. idp_contact_email = [email protected] string value This is the email address of the identity provider's contact person. idp_contact_name = SAML Identity Provider Support string value This is the given name of the identity provider's contact person. idp_contact_surname = Support string value This is the surname of the identity provider's contact person. idp_contact_telephone = +1 800 555 0100 string value This is the telephone number of the identity provider's contact person. idp_contact_type = other string value This is the type of contact that best describes the identity provider's contact person. idp_entity_id = None uri value This is the unique entity identifier of the identity provider (keystone) to use when generating SAML assertions. This value is required to generate identity provider metadata and must be a URI (a URL is recommended). For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp . idp_lang = en string value This is the language used by the identity provider's organization. idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml string value Absolute path to the identity provider metadata file. This file should be generated with the keystone-manage saml_idp_metadata command. There is typically no reason to change this value. idp_organization_display_name = OpenStack SAML Identity Provider string value This is the name of the identity provider's organization to be displayed. 
idp_organization_name = SAML Identity Provider string value This is the name of the identity provider's organization. idp_organization_url = https://example.com/ uri value This is the URL of the identity provider's organization. The URL referenced here should be useful to humans. idp_sso_endpoint = None uri value This is the single sign-on (SSO) service location of the identity provider which accepts HTTP POST requests. A value is required to generate identity provider metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso . keyfile = /etc/keystone/ssl/private/signing_key.pem string value Absolute path to the private key file to use for SAML signing. The value cannot contain a comma ( , ). relay_state_prefix = ss:mem: string value The prefix of the RelayState SAML attribute to use when generating enhanced client and proxy (ECP) assertions. In a typical deployment, there is no reason to change this value. xmlsec1_binary = xmlsec1 string value Name of, or absolute path to, the binary to be used for XML signing. Although only the XML Security Library ( xmlsec1 ) is supported, it may have a non-standard name or path on your system. If keystone cannot find the binary itself, you may need to install the appropriate package, use this option to specify an absolute path, or adjust keystone's PATH environment variable. 7.1.37. security_compliance The following table outlines the options available under the [security_compliance] group in the /etc/keystone/keystone.conf file. Table 7.36. security_compliance Configuration option = Default value Type Description change_password_upon_first_use = False boolean value Enabling this option requires users to change their password when the user is created, or upon administrative reset. Before accessing any services, affected users will have to change their password. To ignore this requirement for specific users, such as service users, set the options attribute ignore_change_password_upon_first_use to True for the desired user via the update user API. This feature is disabled by default. This feature is only applicable with the sql backend for the [identity] driver . disable_user_account_days_inactive = None integer value The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). This feature is disabled by default; set any value to enable it. This feature depends on the sql backend for the [identity] driver . When a user exceeds this threshold and is considered "inactive", the user's enabled attribute in the HTTP API may not match the value of the user's enabled column in the user table. lockout_duration = 1800 integer value The number of seconds a user account will be locked when the maximum number of failed authentication attempts (as specified by [security_compliance] lockout_failure_attempts ) is exceeded. Setting this option will have no effect unless you also set [security_compliance] lockout_failure_attempts to a non-zero value. This feature depends on the sql backend for the [identity] driver . lockout_failure_attempts = None integer value The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by [security_compliance] lockout_duration . This feature is disabled by default. If this feature is enabled and [security_compliance] lockout_duration is not set, then users may be locked out indefinitely until the user is explicitly enabled via the API. 
This feature depends on the sql backend for the [identity] driver . minimum_password_age = 0 integer value The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. This feature does not prevent administrators from manually resetting passwords. It is disabled by default and allows for immediate password changes. This feature depends on the sql backend for the [identity] driver . Note: If [security_compliance] password_expires_days is set, then the value for this option should be less than the password_expires_days . password_expires_days = None integer value The number of days for which a password will be considered valid before requiring it to be changed. This feature is disabled by default. If enabled, new password changes will have an expiration date; however, existing passwords will not be impacted. This feature depends on the sql backend for the [identity] driver . password_regex = None string value The regular expression used to validate password strength requirements. By default, the regular expression will match any password. The following is an example of a pattern which requires at least 1 letter, 1 digit, and a minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature depends on the sql backend for the [identity] driver . password_regex_description = None string value Describe your password regular expression here in language for humans. If a password fails to match the regular expression, the contents of this configuration variable will be returned to users to explain why their requested password was insufficient. unique_last_password_count = 0 integer value This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. The total number which includes the new password should not be greater than or equal to this value. Setting the value to zero (the default) disables this feature. Thus, to enable this feature, values must be greater than 0. This feature depends on the sql backend for the [identity] driver . 7.1.38. shadow_users The following table outlines the options available under the [shadow_users] group in the /etc/keystone/keystone.conf file. Table 7.37. shadow_users Configuration option = Default value Type Description driver = sql string value Entry point for the shadow users backend driver in the keystone.identity.shadow_users namespace. This driver is used for persisting local user references to externally-managed identities (via federation, LDAP, etc). Keystone only provides a sql driver, so there is no reason to change this option unless you are providing a custom entry point. 7.1.39. token The following table outlines the options available under the [token] group in the /etc/keystone/keystone.conf file. Table 7.38. token Configuration option = Default value Type Description allow_expired_window = 172800 integer value This controls the number of seconds that a token can be retrieved for beyond the built-in expiry time. This allows long running operations to succeed. Defaults to two days. allow_rescope_scoped_token = True boolean value This toggles whether scoped tokens may be re-scoped to a new project or domain, thereby preventing users from exchanging a scoped token (including those with a default project scope) for any other token.
This forces users to either authenticate for unscoped tokens (and later exchange that unscoped token for tokens with a more specific scope) or to provide their credentials in every request for a scoped token to avoid re-scoping altogether. cache_on_issue = True boolean value Enable storing issued token data to token validation cache so that first token validation doesn't actually cause full validation cycle. This option has no effect unless global caching is enabled and will still cache tokens even if [token] caching = False . cache_time = None integer value The number of seconds to cache token creation and validation data. This has no effect unless both global and [token] caching are enabled. caching = True boolean value Toggle for caching token creation and validation data. This has no effect unless global caching is enabled. expiration = 3600 integer value The amount of time that a token should remain valid (in seconds). Drastically reducing this value may break "long-running" operations that involve multiple services to coordinate together, and will force users to authenticate with keystone more frequently. Drastically increasing this value will increase the number of tokens that will be simultaneously valid. Keystone tokens are also bearer tokens, so a shorter duration will also reduce the potential security impact of a compromised token. provider = fernet string value Entry point for the token provider in the keystone.token.provider namespace. The token provider controls the token construction, validation, and revocation operations. Supported upstream providers are fernet and jws . Neither fernet or jws tokens require persistence and both require additional setup. If using fernet , you're required to run keystone-manage fernet_setup , which creates symmetric keys used to encrypt tokens. If using jws , you're required to generate an ECDSA keypair using a SHA-256 hash algorithm for signing and validating token, which can be done with keystone-manage create_jws_keypair . Note that fernet tokens are encrypted and jws tokens are only signed. Please be sure to consider this if your deployment has security requirements regarding payload contents used to generate token IDs. revoke_by_id = True boolean value This toggles support for revoking individual tokens by the token identifier and thus various token enumeration operations (such as listing all tokens issued to a specific user). These operations are used to determine the list of tokens to consider revoked. Do not disable this option if you're using the kvs [revoke] driver . 7.1.40. tokenless_auth The following table outlines the options available under the [tokenless_auth] group in the /etc/keystone/keystone.conf file. Table 7.39. tokenless_auth Configuration option = Default value Type Description issuer_attribute = SSL_CLIENT_I_DN string value The name of the WSGI environment variable used to pass the issuer of the client certificate to keystone. This attribute is used as an identity provider ID for the X.509 tokenless authorization along with the protocol to look up its corresponding mapping. In a typical deployment, there is no reason to change this value. protocol = x509 string value The federated protocol ID used to represent X.509 tokenless authorization. This is used in combination with the value of [tokenless_auth] issuer_attribute to find a corresponding federated mapping. In a typical deployment, there is no reason to change this value. 
trusted_issuer = [] multi valued The list of distinguished names which identify trusted issuers of client certificates allowed to use X.509 tokenless authorization. If the option is absent then no certificates will be allowed. The format for the values of a distinguished name (DN) must be separated by a comma and contain no spaces. Furthermore, because an individual DN may contain commas, this configuration option may be repeated multiple times to represent multiple values. For example, keystone.conf would include two consecutive lines in order to trust two different DNs, such as trusted_issuer = CN=john,OU=keystone,O=openstack and trusted_issuer = CN=mary,OU=eng,O=abc . 7.1.41. totp The following table outlines the options available under the [totp] group in the /etc/keystone/keystone.conf file. Table 7.40. totp Configuration option = Default value Type Description included_previous_windows = 1 integer value The number of windows to check when processing TOTP passcodes. 7.1.42. trust The following table outlines the options available under the [trust] group in the /etc/keystone/keystone.conf file. Table 7.41. trust Configuration option = Default value Type Description allow_redelegation = False boolean value Allows authorization to be redelegated from one user to another, effectively chaining trusts together. When disabled, the remaining_uses attribute of a trust is constrained to be zero. driver = sql string value Entry point for the trust backend driver in the keystone.trust namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. max_redelegation_count = 3 integer value Maximum number of times that authorization can be redelegated from one user to another in a chain of trusts. This number may be reduced further for a specific trust. 7.1.43. unified_limit The following table outlines the options available under the [unified_limit] group in the /etc/keystone/keystone.conf file. Table 7.42. unified_limit Configuration option = Default value Type Description cache_time = None integer value Time to cache unified limit data, in seconds. This has no effect unless both global caching and [unified_limit] caching are enabled. caching = True boolean value Toggle for unified limit caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = sql string value Entry point for the unified limit backend driver in the keystone.unified_limit namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. enforcement_model = flat string value The enforcement model to use when validating limits associated to projects. Enforcement models will behave differently depending on the existing limits, which may result in backwards incompatible changes if a model is switched in a running deployment. list_limit = None integer value Maximum number of entities that will be returned in a role collection. This may be useful to tune if you have a large number of unified limits in your deployment. 7.1.44. wsgi The following table outlines the options available under the [wsgi] group in the /etc/keystone/keystone.conf file. Table 7.43. wsgi Configuration option = Default value Type Description debug_middleware = False boolean value If set to true, this enables the oslo debug middleware in Keystone. This Middleware prints a lot of information about the request and the response. 
It is useful for getting information about the data on the wire (decoded) and passed to the WSGI application pipeline. This middleware has no effect on the "debug" setting in the [DEFAULT] section of the config file or setting Keystone's log-level to "DEBUG"; it is specific to debugging the WSGI data as it enters and leaves Keystone (specific request-related data). This option is used for introspection on the request and response data between the web server (apache, nginx, etc) and Keystone. This middleware is inserted as the first element in the middleware chain and will show the data closest to the wire. WARNING: NOT INTENDED FOR USE IN PRODUCTION. THIS MIDDLEWARE CAN AND WILL EMIT SENSITIVE/PRIVILEGED DATA.
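To make the reference above more concrete, the following is a minimal, illustrative /etc/keystone/keystone.conf excerpt that combines a few of the option groups documented in this chapter. The option names are taken from the tables above; every value shown (the LDAP host, domain suffix, bind DN and password, CA file path, and lockout thresholds) is a placeholder example only, not a recommended setting for any particular deployment.
[ldap]
# Hypothetical directory server, reached over STARTTLS (ldap:// URL plus use_tls)
url = ldap://ldap.example.com
suffix = dc=example,dc=com
user = cn=keystone-bind,dc=example,dc=com
password = CHANGE_ME
use_tls = True
tls_cacertfile = /etc/keystone/ldap-ca.pem
user_objectclass = inetOrgPerson
group_objectclass = groupOfNames
group_member_attribute = member
use_pool = True
pool_size = 10
[token]
# Fernet tokens, valid for one hour (the documented defaults)
provider = fernet
expiration = 3600
[security_compliance]
# Example only: lock an account for 30 minutes after five failed authentication attempts
lockout_failure_attempts = 5
lockout_duration = 1800
As with any change to keystone.conf, restart the keystone server processes for new values to take effect.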
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/configuration_reference/keystone
Chapter 33. Jira Add Issue Sink
Chapter 33. Jira Add Issue Sink Add a new issue to Jira. The Kamelet expects the following headers to be set: projectKey / ce-projectKey : as the Jira project key. issueTypeName / ce-issueTypeName : as the name of the issue type (example: Bug, Enhancement). issueSummary / ce-issueSummary : as the title or summary of the issue. issueAssignee / ce-issueAssignee : as the user assigned to the issue (Optional). issuePriorityName / ce-issuePriorityName : as the priority name of the issue (example: Critical, Blocker, Trivial) (Optional). issueComponents / ce-issueComponents : as list of string with the valid component names (Optional). issueDescription / ce-issueDescription : as the issue description (Optional). The issue description can be set from the body of the message or the issueDescription / ce-issueDescription in the header, however the body takes precedence. 33.1. Configuration Options The following table summarizes the configuration options available for the jira-add-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 33.2. Dependencies At runtime, the jira-add-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 33.3. Usage This section describes how you can use the jira-add-issue-sink . 33.3.1. Knative Sink You can use the jira-add-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-add-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "projectKey" value: "MYP" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 33.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 33.3.1.2. Procedure for using the cluster CLI Save the jira-add-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-issue-sink-binding.yaml 33.3.1.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a bug" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 33.3.2. Kafka Sink You can use the jira-add-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jira-add-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "projectKey" value: "MYP" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 33.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 33.3.2.2. Procedure for using the cluster CLI Save the jira-add-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-issue-sink-binding.yaml 33.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a bug" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 33.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-add-issue-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"projectKey\" value: \"MYP\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-add-issue-sink-binding.yaml", "kamel bind --name jira-add-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a bug\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"projectKey\" value: \"MYP\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-add-issue-sink-binding.yaml", "kamel bind --name jira-add-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a bug\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jira-add-issue-sink
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions Revised on 2023-05-17 11:27:00 UTC
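The Registration Assistant shows the exact command for your OS version. As a hedged illustration only, and not a substitute for the command listed by the assistant, registering a Red Hat Enterprise Linux system with subscription-manager typically looks like the following; replace <your-red-hat-login> with your Customer Portal user name.
subscription-manager register --username <your-red-hat-login>   # register the system; you are prompted for the password
subscription-manager attach --auto                              # attach an available subscription so that product repositories become accessible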
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/using_your_subscription
Chapter 111. Demoting or promoting hidden replicas
Chapter 111. Demoting or promoting hidden replicas After a replica has been installed, you can configure whether the replica is hidden or visible. For details about hidden replicas, see The hidden replica mode . Prerequisites Ensure that the replica is not the DNSSEC key master. If it is, move the service to another replica before making this replica hidden. Ensure that the replica is not a CA renewal server. If it is, move the service to another replica before making this replica hidden. For details, see Changing and resetting IdM CA renewal server . Procedure To hide a replica: To make a replica visible again: To view a list of all the hidden replicas in your topology: If all of your replicas are enabled, the command output does not mention hidden replicas.
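For example, the following session uses the commands from this procedure against the replica replica.idm.example.com, which is the example host name used in this chapter:
ipa server-state replica.idm.example.com --state=hidden    # hide the replica
ipa server-state replica.idm.example.com --state=enabled   # make the replica visible again
ipa config-show                                            # list any hidden replicas in the topology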
[ "ipa server-state replica.idm.example.com --state=hidden", "ipa server-state replica.idm.example.com --state=enabled", "ipa config-show" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/demoting-or-promoting-hidden-replicas_configuring-and-managing-idm
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/red-hat-data-grid
Chapter 31. Using Ansible to manage DNS records in IdM
Chapter 31. Using Ansible to manage DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM) using an Ansible playbook. As an IdM administrator, you can add, modify, and delete DNS records in IdM. The chapter contains the following sections: Ensuring the presence of A and AAAA DNS records in IdM using Ansible Ensuring the presence of A and PTR DNS records in IdM using Ansible Ensuring the presence of multiple DNS records in IdM using Ansible Ensuring the presence of multiple CNAME records in IdM using Ansible Ensuring the presence of an SRV record in IdM using Ansible 31.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 31.2. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 31.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 31.2. 
"A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.168.122.123. Table 31.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the AAAA record value is --aaaa-rec . However, when modifying an AAAA record, the --aaaa-rec option is used to specify the current value for the AAAA record. The new value is set with the --aaaa-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 31.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.idm.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. $ ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 31.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. 
The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 31.3. Ensuring the presence of A and AAAA DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that A and AAAA records for a particular IdM host are present. In the example used in the procedure below, an IdM administrator ensures the presence of A and AAAA records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-A-and-AAAA-records-are-present.yml Ansible playbook file. For example: Open the ensure-A-and-AAAA-records-are-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable, set the name variable to host1 , and the a_ip_address variable to 192.168.122.123 . In the records variable, set the name variable to host1 , and the aaaa_ip_address variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.4. Ensuring the presence of A and PTR DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that an A record for a particular IdM host is present, with a corresponding PTR record. In the example used in the procedure below, an IdM administrator ensures the presence of A and PTR records for host1 with an IP address of 192.168.122.45 in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com DNS zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-dnsrecord-with-reverse-is-present.yml Ansible playbook file. For example: Open the ensure-dnsrecord-with-reverse-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the ip_address variable to 192.168.122.45 . Set the create_reverse variable to true . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.5. Ensuring the presence of multiple DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that multiple values are associated with a particular IdM DNS record. In the example used in the procedure below, an IdM administrator ensures the presence of multiple A records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-presence-multiple-records.yml Ansible playbook file. For example: Open the ensure-presence-multiple-records-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. In the records section, set the name variable to host1 . 
In the records section, set the zone_name variable to idm.example.com . In the records section, set the a_rec variable to 192.168.122.112 and to 192.168.122.122 . Define a second record in the records section: Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the aaaa_rec variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.6. Ensuring the presence of multiple CNAME records in IdM using Ansible A Canonical Name record (CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name, an alias, to another name, the canonical name. You may find CNAME records useful when running multiple services from a single IP address: for example, an FTP service and a web service, each running on a different port. Follow this procedure to use an Ansible playbook to ensure that multiple CNAME records are present in IdM DNS. In the example used in the procedure below, host03 is both an HTTP server and an FTP server. The IdM administrator ensures the presence of the www and ftp CNAME records for the host03 A record in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . The host03 A record exists in the idm.example.com zone. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-CNAME-record-is-present.yml Ansible playbook file. For example: Open the ensure-CNAME-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Optional: Adapt the description provided by the name of the play. Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable section, set the following variables and values: Set the name variable to www . Set the cname_hostname variable to host03 . Set the name variable to ftp . Set the cname_hostname variable to host03 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 31.7. 
Ensuring the presence of an SRV record in IdM using Ansible A DNS service (SRV) record defines the hostname, port number, transport protocol, priority and weight of a service available in a domain. In Identity Management (IdM), you can use SRV records to locate IdM servers and replicas. Follow this procedure to use an Ansible playbook to ensure that an SRV record is present in IdM DNS. In the example used in the procedure below, an IdM administrator ensures the presence of the _kerberos._udp.idm.example.com SRV record with the value of 10 50 88 idm.example.com . This sets the following values: It sets the priority of the service to 10. It sets the weight of the service to 50. It sets the port to be used by the service to 88. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-SRV-record-is-present.yml Ansible playbook file. For example: Open the ensure-SRV-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to _kerberos._udp.idm.example.com . Set the srv_rec variable to '10 50 88 idm.example.com' . Set the zone_name variable to idm.example.com . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory
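After any of these playbooks completes, you can confirm the result on an IdM client with the ipa command-line tools. This is a minimal verification sketch rather than part of the documented procedures; it assumes a valid Kerberos ticket for the IdM administrator and the example names used in this chapter.
kinit admin                                  # obtain a ticket as the IdM administrator
ipa dnsrecord-show idm.example.com host1     # display the A and AAAA records ensured for host1
ipa dnsrecord-find idm.example.com           # list all records in the zone, including the CNAME and SRV entries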
[ "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-A-and-AAAA-records-are-present.yml ensure-A-and-AAAA-records-are-present-copy.yml", "--- - name: Ensure A and AAAA records are present hosts: ipaserver become: true gather_facts: false tasks: # Ensure A and AAAA records are present - name: Ensure that 'host1' has A and AAAA records. ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: host1 a_ip_address: 192.168.122.123 - name: host1 aaaa_ip_address: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-A-and-AAAA-records-are-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-dnsrecord-with-reverse-is-present.yml ensure-dnsrecord-with-reverse-is-present-copy.yml", "--- - name: Ensure DNS Record is present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that dns record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host1 zone_name: idm.example.com ip_address: 192.168.122.45 create_reverse: true state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-dnsrecord-with-reverse-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-presence-multiple-records.yml ensure-presence-multiple-records-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that multiple dns records are present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" records: - name: host1 zone_name: idm.example.com a_rec: 192.168.122.112 a_rec: 192.168.122.122 - name: host1 zone_name: idm.example.com aaaa_rec: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-records-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-CNAME-record-is-present.yml ensure-CNAME-record-is-present-copy.yml", "--- - name: Ensure that 'www.idm.example.com' and 'ftp.idm.example.com' CNAME records point to 'host03.idm.example.com'. hosts: ipaserver become: true gather_facts: false tasks: - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: www cname_hostname: host03 - name: ftp cname_hostname: host03", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-CNAME-record-is-present.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-SRV-record-is-present.yml ensure-SRV-record-is-present-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure a SRV record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: _kerberos._udp.idm.example.com srv_rec: '10 50 88 idm.example.com' zone_name: idm.example.com state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-SRV-record-is-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-manage-dns-records-in-idm_using-ansible-to-install-and-manage-idm
Chapter 28. Automatic Bug Reporting Tool (ABRT)
Chapter 28. Automatic Bug Reporting Tool (ABRT) The Automatic Bug Reporting Tool , commonly abbreviated as ABRT , consists of the abrtd daemon and a number of system services and utilities to process, analyze, and report detected problems. The daemon runs silently in the background most of the time, and springs into action when an application crashes or a kernel oops is detected. The daemon then collects the relevant problem data such as a core file if there is one, the crashing application's command-line parameters, and other data of forensic utility. For a brief overview of the most important ABRT components, see Table 28.1, "Basic ABRT components" . Important For Red Hat Enterprise Linux 6.2, the Automatic Bug Reporting Tool has been upgraded to version 2.0. The ABRT 2-series brings major improvements to automatic bug detection and reporting. Table 28.1. Basic ABRT components Component Package Description abrtd abrt The ABRT daemon which runs under the root user as a background service. abrt-applet abrt-gui The program that receives messages from abrtd and informs you whenever a new problem occurs. abrt-gui abrt-gui The GUI application that shows collected problem data and allows you to further process it. abrt-cli abrt-cli The command-line interface that provides similar functionality to the GUI. abrt-ccpp abrt-addon-ccpp The ABRT service that provides the C/C++ problems analyzer. abrt-oops abrt-addon-kerneloops The ABRT service that provides the kernel oopses analyzer. abrt-vmcore abrt-addon-vmcore [a] The ABRT service that provides the kernel panic analyzer and reporter. [a] The abrt-addon-vmcore package is provided by the Optional subscription channel. See Section 8.4.8, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. ABRT currently supports detection of crashes in applications written in the C/C++ and Python languages, as well as kernel oopses. With Red Hat Enterprise Linux 6.3, ABRT can also detect kernel panics if the additional abrt-addon-vmcore package is installed and the kdump crash dumping mechanism is enabled and configured on the system accordingly. ABRT is capable of reporting problems to a remote issue tracker. Reporting can be configured to happen automatically whenever an issue is detected, or problem data can be stored locally, reviewed, reported, and deleted manually by a user. The reporting tools can send problem data to a Bugzilla database, a Red Hat Technical Support (RHTSupport) site, upload it using FTP / SCP , email it, or write it to a file. The part of ABRT which handles already-existing problem data (as opposed to, for example, creation of new problem data) has been factored out into a separate project, libreport . The libreport library provides a generic mechanism for analyzing and reporting problems, and it is used by applications other than ABRT . However, ABRT and libreport operation and configuration is closely integrated. They are therefore discussed as one in this document. Whenever a problem is detected, ABRT compares it with all existing problem data and determines whether that same problem has been recorded. If it has been, the existing problem data is updated and the most recent (duplicate) problem is not recorded again. If this problem is not recognized by ABRT , a problem data directory is created. A problem data directory typically consists of files such as: analyzer , architecture , coredump , cmdline , executable , kernel , os_release , reason , time and uid . 
Other files, such as backtrace , can be created during analysis depending on which analyzer method is used and its configuration settings. Each of these files holds specific information about the system and the problem itself. For example, the kernel file records the version of the crashed kernel. After the problem directory is created and problem data gathered, you can further process, analyze and report the problem using either the ABRT GUI, or the abrt-cli utility for the command line. For more information about these tools, see Section 28.2, "Using the Graphical User Interface" and Section 28.3, "Using the Command-Line Interface" respectively. Note If you do not use ABRT to further analyze and report the detected problems but instead you report problems using a legacy problem reporting tool, report , note that you can no longer file new bugs. The report utility can now only be used to attach new content to the already existing bugs in the RHTSupport or Bugzilla database. Use the following command to do so: report [ -v ] --target target --ticket ID file ...where target is either strata for reporting to RHTSupport or bugzilla for reporting to Bugzilla. ID stands for a number identifying an existing problem case in the respective database, and file is a file containing information to be added to the problem case. If you want to report new problems and you do not want to use abrt-cli , you can now use the report-cli utility instead of report . Issue the following command to let report-cli guide you through the problem reporting process: report-cli -r dump_directory ...where dump_directory is a problem data directory created by ABRT or some other application using libreport . For more information on report-cli , see man report-cli . 28.1. Installing ABRT and Starting its Services As a prerequisite for its use, the abrtd daemon requires the abrt user to exist for file system operations in the /var/spool/abrt directory. When the abrt package is installed, it automatically creates the abrt user whose UID and GID is 173, if such user does not already exist. Otherwise, the abrt user can be created manually. In that case, any UID and GID can be chosen, because abrtd does not require a specific UID and GID. As the first step in order to use ABRT , you should ensure that the abrt-desktop package is installed on your system by running the following command as the root user: With abrt-desktop installed, you will be able to use ABRT only in its graphical interface. If you intend to use ABRT on the command line, install the abrt-cli package: See Section 8.2.4, "Installing Packages" for more information on how to install packages with the Yum package manager. Your next step should be to verify that abrtd is running. The daemon is typically configured to start up at boot time. You can use the following command as root to verify its current status: If the service command returns the abrtd is stopped message, the daemon is not running. It can be started for the current session by entering this command: Similarly, you can follow the same steps to check and start up the abrt-ccpp service if you want ABRT to catch C/C++ crashes. To set ABRT to detect kernel oopses, use the same steps for the abrt-oops service. Note that this service cannot catch kernel oopses which cause the system to fail, to become unresponsive or to reboot immediately. To be able to detect such kernel oopses with ABRT , you need to install the abrt-vmcore service. 
If you require this functionality, see Section 28.4.5, "Configuring ABRT to Detect a Kernel Panic" for more information. When installing ABRT packages, all respective ABRT services are automatically enabled for runlevels 3 and 5 . You can disable or enable any ABRT service for the desired runlevels using the chkconfig utility. See Section 12.2.3, "Using the chkconfig Utility" for more information. Warning Please note that installing ABRT packages overwrites the /proc/sys/kernel/core_pattern file which can contain a template used to name core dump files. The content of this file will be overwritten to: |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e Finally, if you run ABRT in a graphical desktop environment, you can verify that the ABRT notification applet is running: If the ABRT notification applet is not running, you can start it manually in your current desktop session by running the abrt-applet program: The applet can be configured to start automatically when your graphical desktop session starts. You can ensure that the ABRT notification applet is added to the list of programs and selected to run at system startup by selecting the System Preferences Startup Applications menu in the top panel. Figure 28.1. Setting ABRT notification applet to run automatically.
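As a quick recap of the service checks described in this section, the following sketch verifies the main ABRT daemon and enables the related services so that they start automatically in runlevels 3 and 5. It only restates the service and chkconfig usage covered above and assumes the default service names.
service abrtd status        # verify that the ABRT daemon is running
service abrt-ccpp start     # start the C/C++ crash analyzer for the current session
service abrt-oops start     # start the kernel oops analyzer for the current session
chkconfig abrtd on          # enable the daemon at boot time
chkconfig abrt-ccpp on
chkconfig abrt-oops on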
[ "~]# yum install abrt-desktop", "~]# yum install abrt-cli", "~]# service abrtd status abrtd (pid 1535) is running", "~]# service abrtd start Starting abrt daemon: [ OK ]", "|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e", "~]USD ps -el | grep abrt-applet 0 S 500 2036 1824 0 80 0 - 61604 poll_s ? 00:00:00 abrt-applet", "~]USD abrt-applet & [1] 2261" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-abrt
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_ruby_client/using_your_subscription