title | content | commands | url
---|---|---|---|
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_developer_guide/making-open-source-more-inclusive |
5.167. libxklavier | 5.167. libxklavier 5.167.1. RHBA-2012:0923 - libxklavier bug fix update Updated libxklavier packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The libxklavier library provides a high-level API for the X Keyboard Extension (XKB) that allows extended keyboard control. This library supports X.Org and other commercial implementations of the X Window system. The library is useful for creating XKB-related software, such as layout indicators. Bug Fixes BZ# 657726 , BZ# 766645 Prior to this update, an attempt to log into the server using an NX or VNC client triggered an XInput error that was handled incorrectly by the libxklavier library due to the way the NoMachine NX Free Edition server implements XInput support. As a consequence, the gnome-settings-daemon aborted unexpectedly. This update modifies the XInput error handling routine in the libxklavier library. Now, the library ignores this error and the gnome-settings-daemon runs as expected. BZ# 726885 Prior to this update, the keyboard layout indicator was not displayed when the layout was changed for the first time. As a consequence, users could not, under certain circumstances, log in. This update modifies the gnome-settings-daemon so that the indicator now shows the correct layout. All users of libxklavier are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libxklavier |
Chapter 2. Decision-authoring assets in Red Hat Process Automation Manager | Chapter 2. Decision-authoring assets in Red Hat Process Automation Manager Red Hat Process Automation Manager supports several assets that you can use to define business decisions for your decision service. Each decision-authoring asset has different advantages, and you might prefer to use one or a combination of multiple assets depending on your goals and needs. The following table highlights the main decision-authoring assets supported in Red Hat Process Automation Manager projects to help you decide or confirm the best method for defining decisions in your decision service. Table 2.1. Decision-authoring assets supported in Red Hat Process Automation Manager Asset Highlights Authoring tools Documentation Decision Model and Notation (DMN) models Are decision models based on a notation standard defined by the Object Management Group (OMG) Use graphical decision requirements diagrams (DRDs) that represent part or all of the overall decision requirements graph (DRG) to trace business decision flows Use an XML schema that allows the DMN models to be shared between DMN-compliant platforms Support Friendly Enough Expression Language (FEEL) to define decision logic in DMN decision tables and other DMN boxed expressions Can be integrated efficiently with Business Process Model and Notation (BPMN) process models Are optimal for creating comprehensive, illustrative, and stable decision flows Business Central or other DMN-compliant editor Designing a decision service using DMN models Guided decision tables Are tables of rules that you create in a UI-based table designer in Business Central Are a wizard-led alternative to spreadsheet decision tables Provide fields and options for acceptable input Support template keys and values for creating rule templates Support hit policies, real-time validation, and other additional features not supported in other assets Are optimal for creating rules in a controlled tabular format to minimize compilation errors Business Central Designing a decision service using guided decision tables Spreadsheet decision tables Are XLS or XLSX spreadsheet decision tables that you can upload into Business Central Support template keys and values for creating rule templates Are optimal for creating rules in decision tables already managed outside of Business Central Have strict syntax requirements for rules to be compiled properly when uploaded Spreadsheet editor Designing a decision service using spreadsheet decision tables Guided rules Are individual rules that you create in a UI-based rule designer in Business Central Provide fields and options for acceptable input Are optimal for creating single rules in a controlled format to minimize compilation errors Business Central Designing a decision service using guided rules Guided rule templates Are reusable rule structures that you create in a UI-based template designer in Business Central Provide fields and options for acceptable input Support template keys and values for creating rule templates (fundamental to the purpose of this asset) Are optimal for creating many rules with the same rule structure but with different defined field values Business Central Designing a decision service using guided rule templates DRL rules Are individual rules that you define directly in .drl text files Provide the most flexibility for defining rules and other technicalities of rule behavior Can be created in certain standalone environments and integrated with Red Hat 
Process Automation Manager Are optimal for creating rules that require advanced DRL options Have strict syntax requirements for rules to be compiled properly Business Central or integrated development environment (IDE) Designing a decision service using DRL rules Predictive Model Markup Language (PMML) models Are predictive data-analytic models based on a notation standard defined by the Data Mining Group (DMG) Use an XML schema that allows the PMML models to be shared between PMML-compliant platforms Support Regression, Scorecard, Tree, Mining, and other model types Can be included with a standalone Red Hat Process Automation Manager project or imported into a project in Business Central Are optimal for incorporating predictive data into decision services in Red Hat Process Automation Manager PMML or XML editor Designing a decision service using PMML models When you define business decisions, you can also consider using Red Hat build of Kogito for your cloud-native decision services. For more information about getting started with Red Hat build of Kogito microservices, see Getting started with Red Hat build of Kogito in Red Hat Process Automation Manager . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_process_automation_manager/decision-authoring-assets-ref_decision-management-architecture |
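For illustration, a minimal rule in a .drl text file might look like the following sketch. The package, import, fact type, and field names are hypothetical placeholders, not part of any existing project; only the basic rule/when/then structure reflects the strict DRL syntax noted above.

```
package org.example.decisions

import org.example.model.Applicant;

rule "Approve adult applicant"
    when
        // match any Applicant fact whose age field is 18 or greater
        $applicant : Applicant( age >= 18 )
    then
        // modify the matched fact and notify the engine of the change
        $applicant.setApproved( true );
        update( $applicant );
end
```

Rules like this can be authored in Business Central or an IDE and are validated at compile time, which is why the strict syntax requirements are called out for this asset.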
Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) | Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) As of the Red Hat Enterprise Linux 7.4 release, the Red Hat Resilient Storage Add-On provides support for running Samba in an active/active cluster configuration using Pacemaker. The Red Hat Resilient Storage Add-On includes the High Availability Add-On. Note For further information on support policies for Samba, see Support Policies for RHEL Resilient Storage - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal. This chapter describes how to configure a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running Clustered Samba. In this example, the nodes used are z1.example.com and z2.example.com , which have IP addresses of 192.168.1.151 and 192.168.1.152 . A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel. Configuring a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will export the Samba shares and configure fencing for each node in the cluster, as described in Section 4.1, "Creating the Cluster" . Configure a gfs2 file system mounted on the clustered LVM logical volume my_clv on the shared storage for the nodes in the cluster, as described in Section 4.2, "Configuring a Clustered LVM Volume with a GFS2 File System" . Configure Samba on each node in the cluster, as described in Section 4.3, "Configuring Samba" . Create the Samba cluster resources, as described in Section 4.4, "Configuring the Samba Cluster Resources" . Test the Samba share you have configured, as described in Section 4.5, "Testing the Resource Configuration" . 4.1. Creating the Cluster Use the following procedure to install and create the cluster to use for the Samba service: Install the cluster software on nodes z1.example.com and z2.example.com , using the procedure provided in Section 1.1, "Cluster Software Installation" . Create the two-node cluster that consists of z1.example.com and z2.example.com , using the procedure provided in Section 1.2, "Cluster Creation" . As in that example procedure, this use case names the cluster my_cluster . Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration" . This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-hasamba-HAAA |
Chapter 5. LimitRange [v1] | Chapter 5. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LimitRangeSpec defines a min/max usage limit for resources that match on kind. 5.1.1. .spec Description LimitRangeSpec defines a min/max usage limit for resources that match on kind. Type object Required limits Property Type Description limits array Limits is the list of LimitRangeItem objects that are enforced. limits[] object LimitRangeItem defines a min/max usage limit for any resource that matches on kind. 5.1.2. .spec.limits Description Limits is the list of LimitRangeItem objects that are enforced. Type array 5.1.3. .spec.limits[] Description LimitRangeItem defines a min/max usage limit for any resource that matches on kind. Type object Required type Property Type Description default object (Quantity) Default resource requirement limit value by resource name if resource limit is omitted. defaultRequest object (Quantity) DefaultRequest is the default resource requirement request value by resource name if resource request is omitted. max object (Quantity) Max usage constraints on this kind by resource name. maxLimitRequestRatio object (Quantity) MaxLimitRequestRatio if specified, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value; this represents the max burst for the named resource. min object (Quantity) Min usage constraints on this kind by resource name. type string Type of resource that this limit applies to. 5.2. API endpoints The following API endpoints are available: /api/v1/limitranges GET : list or watch objects of kind LimitRange /api/v1/watch/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges DELETE : delete collection of LimitRange GET : list or watch objects of kind LimitRange POST : create a LimitRange /api/v1/watch/namespaces/{namespace}/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges/{name} DELETE : delete a LimitRange GET : read the specified LimitRange PATCH : partially update the specified LimitRange PUT : replace the specified LimitRange /api/v1/watch/namespaces/{namespace}/limitranges/{name} GET : watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/limitranges Table 5.1. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind LimitRange Table 5.2. HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/limitranges Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/limitranges Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of LimitRange Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind LimitRange Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty HTTP method POST Description create a LimitRange Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body LimitRange schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 202 - Accepted LimitRange schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/limitranges Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/limitranges/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the LimitRange namespace string object name and auth scope, such as for teams and projects Table 5.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a LimitRange Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified LimitRange Table 5.23. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified LimitRange Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified LimitRange Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body LimitRange schema Table 5.29. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/limitranges/{name} Table 5.30. Global path parameters Parameter Type Description name string name of the LimitRange namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/limitrange-v1 |
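As a concrete illustration of the LimitRangeSpec fields documented above, the following is a minimal sketch of a LimitRange manifest that could be submitted to the POST /api/v1/namespaces/{namespace}/limitranges endpoint. The object name, namespace, and quantity values are illustrative assumptions, not values taken from the reference above.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits        # hypothetical name
  namespace: example-project   # hypothetical namespace
spec:
  limits:
  - type: Container
    max:                       # max usage constraints by resource name
      cpu: "2"
      memory: 1Gi
    min:                       # min usage constraints by resource name
      cpu: 100m
      memory: 4Mi
    default:                   # default limit applied when a container omits limits
      cpu: 300m
      memory: 200Mi
    defaultRequest:            # default request applied when a container omits requests
      cpu: 200m
      memory: 100Mi
    maxLimitRequestRatio:      # max burst: limit divided by request must not exceed this
      cpu: "10"
```

On a successful create request, the server returns a LimitRange schema with a 200, 201, or 202 response code, as listed in Table 5.14.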
Using Ansible plug-ins for Red Hat Developer Hub | Using Ansible plug-ins for Red Hat Developer Hub Red Hat Ansible Automation Platform 2.5 Use Ansible plug-ins for Red Hat Developer Hub Red Hat Customer Content Services | [
"--- - name: Open HTTPS and SSH on firewall hosts: rhel become: true tasks: - name: Use rhel system roles to allow https and ssh traffic vars: firewall: - service: https state: enabled permanent: true immediate: true zone: public - service: ssh state: enabled permanent: true immediate: true zone: public ansible.builtin.include_role: name: redhat.rhel_system_roles.firewall"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_ansible_plug-ins_for_red_hat_developer_hub/index |
Chapter 12. Configuring the vSphere connection settings after an installation | Chapter 12. Configuring the vSphere connection settings after an installation After installing an OpenShift Container Platform cluster on vSphere with the platform integration feature enabled, you might need to update the vSphere connection settings manually, depending on the installation method. For installations using the Assisted Installer, you must update the connection settings. This is because the Assisted Installer adds default connection settings to the vSphere connection configuration wizard as placeholders during the installation. For installer-provisioned or user-provisioned infrastructure installations, you should have entered valid connection settings during the installation. You can use the vSphere connection configuration wizard at any time to validate or modify the connection settings, but this is not mandatory for completing the installation. 12.1. Configuring the vSphere connection settings Modify the following vSphere configuration settings as required: vCenter address vCenter cluster vCenter username vCenter password vSphere data center vSphere datastore Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to https://console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example, https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the path and name of the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config ConfigMap resource in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . 12.2. Verifying the configuration The connection configuration process updates operator statuses and control plane nodes. 
It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Prerequisites You have saved the configuration settings in the vSphere connection configuration wizard. Procedure Check that the configuration process completed successfully: In the OpenShift Container Platform Administrator perspective, navigate to Home Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem If you are unable to create a PersistentVolumeClaims object, you can troubleshoot by navigating to Storage PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console. For instructions on creating storage objects, see Dynamic provisioning . | [
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_vsphere/installing-vsphere-post-installation-configuration |
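If you prefer to confirm the result from the command line, the resources named in the procedure above can be inspected directly. This is only a quick spot-check sketch; it assumes cluster-admin access and uses no names beyond those already mentioned in the procedure (vsphere-creds, cloud-provider-config, test-pvc):

oc get secret vsphere-creds -n kube-system -o yaml                    # stored vCenter username and password
oc get configmap cloud-provider-config -n openshift-config -o yaml    # updated connection settings
oc get pvc test-pvc -n openshift-config                               # should report STATUS Bound once provisioning works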
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.2/rn-openjdk-support-policy
Lightspeed | Lightspeed OpenShift Container Platform 4.17 About Lightspeed Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/lightspeed/index |
Web console | Web console Red Hat Advanced Cluster Management for Kubernetes 2.12 Console | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/web_console/index |
Part III. Post-installation tasks | Part III. Post-installation tasks Managing subscriptions and securing a Red Hat Enterprise Linux (RHEL) system are essential steps for maintaining system compliance and functionality. Registering RHEL ensures access to software updates and services. Additionally, setting a system purpose aligns the system's usage with the appropriate subscriptions, while adjusting security settings helps safeguard critical infrastructure. When needed, subscription services can be updated or changed to meet evolving system requirements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/post-installation-tasks |
Chapter 4. Configuration | Chapter 4. Configuration Camel Quarkus automatically configures and deploys a Camel Context bean which by default is started/stopped according to the Quarkus Application lifecycle. The configuration step happens at build time during Quarkus' augmentation phase and it is driven by the Camel Quarkus extensions which can be tuned using Camel Quarkus specific quarkus.camel.* properties. Note quarkus.camel.* configuration properties are documented on the individual extension pages - for example see Camel Quarkus Core . After the configuration is done, a minimal Camel Runtime is assembled and started in the RUNTIME_INIT phase. 4.1. Configuring Camel components 4.1.1. application.properties To configure components and other aspects of Apache Camel through properties, make sure that your application depends on camel-quarkus-core directly or transitively. Because most Camel Quarkus extensions depend on camel-quarkus-core , you typically do not need to add it explicitly. camel-quarkus-core brings functionalities from Camel Main to Camel Quarkus. In the example below, you set a specific ExchangeFormatter configuration on the LogComponent via application.properties : camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false 4.1.2. CDI You can also configure a component programmatically using CDI. The recommended method is to observe the ComponentAddEvent and configure the component before the routes and the CamelContext are started: import javax.enterprise.context.ApplicationScoped; import javax.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } } 4.1.2.1. Producing a @Named component instance Alternatively, you can create and configure the component yourself in a @Named producer method. This works as Camel uses the component URI scheme to look-up components from its registry. For example, in the case of a LogComponent Camel looks for a log named bean. Warning Please note that while producing a @Named component bean will usually work, it may cause subtle issues with some components. Camel Quarkus extensions may do one or more of the following: Pass custom subtype of the default Camel component type. See the Vert.x WebSocket extension example. Perform some Quarkus specific customization of the component. See the JPA extension example. These actions are not performed when you produce your own component instance, therefore, configuring components in an observer method is the recommended method. 
import javax.enterprise.context.ApplicationScoped; import javax.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named("log") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } } 1 The "log" argument of the @Named annotation can be omitted if the name of the method is the same. 4.2. Configuration by convention In addition to support configuring Camel through properties, camel-quarkus-core allows you to use conventions to configure the Camel behavior. For example, if there is a single ExchangeFormatter instance in the CDI container, then it will automatically wire that bean to the LogComponent . Additional resources Configuring and using Metering in OpenShift Container Platform | [
"camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false",
"import javax.enterprise.context.ApplicationScoped; import javax.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }",
"import javax.enterprise.context.ApplicationScoped; import javax.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/developing_applications_with_camel_extensions_for_quarkus/camel-quarkus-extensions-configuration |
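To illustrate the configuration-by-convention behavior described in section 4.2, the following sketch produces a single ExchangeFormatter bean; under the stated convention, camel-quarkus-core wires it into the LogComponent without any log-specific code. The class name FormatterConfiguration is an assumption made for this example.

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import org.apache.camel.spi.ExchangeFormatter;
import org.apache.camel.support.processor.DefaultExchangeFormatter;

@ApplicationScoped
public class FormatterConfiguration {

    /* The only ExchangeFormatter bean in the CDI container, so by convention
     * it is wired into the LogComponent automatically. */
    @Produces
    ExchangeFormatter exchangeFormatter() {
        DefaultExchangeFormatter formatter = new DefaultExchangeFormatter();
        formatter.setShowExchangePattern(false);
        formatter.setShowBodyType(false);
        return formatter;
    }
}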
3.4. RAID and Other Disk Devices | 3.4. RAID and Other Disk Devices Important Red Hat Enterprise Linux 6 uses mdraid instead of dmraid for installation onto Intel BIOS RAID sets. These sets are detected automatically, and devices with Intel ISW metadata are recognized as mdraid instead of dmraid. Note that the device node names of any such devices under mdraid are different from their device node names under dmraid . Therefore, special precautions are necessary when you migrate systems with Intel BIOS RAID sets. Local modifications to /etc/fstab , /etc/crypttab or other configuration files which refer to devices by their device node names will not work in Red Hat Enterprise Linux 6. Before migrating these files, you must therefore edit them to replace device node paths with device UUIDs instead. You can find the UUIDs of devices with the blkid command. 3.4.1. Hardware RAID RAID, or Redundant Array of Independent Disks, allows a group, or array, of drives to act as a single device. Configure any RAID functions provided by the mainboard of your computer, or attached controller cards, before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. On systems with more than one hard drive you may configure Red Hat Enterprise Linux to operate several of the drives as a Linux RAID array without requiring any additional hardware. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-partitioning-raid-x86 |
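As a sketch of the UUID-based migration described above (the device name, UUID, and mount point below are illustrative), you can look up the UUID with blkid and then reference it in /etc/fstab instead of the device node name:

blkid /dev/md0
# /dev/md0: UUID="3e6be9de-8139-11d1-9106-a43f08d823a6" TYPE="ext4"

# /etc/fstab entry that uses the UUID rather than the device node name
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /data  ext4  defaults  1 2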
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/planning_your_automation_jobs_using_the_automation_savings_planner/providing-feedback |
Configuring sidecar containers on Cryostat | Configuring sidecar containers on Cryostat Red Hat build of Cryostat 3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_sidecar_containers_on_cryostat/index |
Chapter 6. How to use dedicated worker nodes for Red Hat OpenShift Container Storage | Chapter 6. How to use dedicated worker nodes for Red Hat OpenShift Container Storage Using infrastructure nodes to schedule Red Hat OpenShift Container Storage resources saves on Red Hat OpenShift Container Platform subscription costs. Any Red Hat OpenShift Container Platform (RHOCP) node that has an infra node-role label requires an OpenShift Container Storage subscription, but not an RHOCP subscription. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 6.3, "Manual creation of infrastructure nodes" section for more information. 6.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Container Storage have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Container Storage entitlements are necessary for the nodes running OpenShift Container Storage. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Container Storage taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Container Storage resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Container Storage resources to be scheduled on the tainted nodes. Example of the taint and labels required on infrastructure node that will be used to run OpenShift Container Storage services: 6.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Container Storage does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Container Storage services. 6.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Container Storage services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Container Storage taint is also required so that the infra node will only schedule OpenShift Container Storage resources and repel any other non-OpenShift Container Storage workloads. 
Warning Do not remove the node-role.kubernetes.io/worker="" node role. Removing node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If it has already been removed, add it again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Container Storage taint is sufficient to conform to entitlement exemption requirements. | [
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-container-storage_osp |
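After labeling and tainting the nodes, a quick verification sketch (assuming cluster-admin access) is to confirm that the expected label and taint are present:

oc get nodes -l node-role.kubernetes.io/infra -o name    # lists only the infra-labeled nodes
oc describe node <node> | grep Taints                    # should show node.ocs.openshift.io/storage=true:NoSchedule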
10.2. Automated Enrollment | 10.2. Automated Enrollment In automated enrollment, an end-entity enrollment request is processed as soon as the user successfully authenticates by the method set in the authentication plug-in module; no agent approval is necessary. The following authentication plug-in modules are provided: Directory-based enrollment. End entities are authenticated against an LDAP directory using their user ID and password or their DN and password. See Section 10.2.1, "Setting up Directory-Based Authentication" . PIN-based enrollment. End entities are authenticated against an LDAP directory using their user ID, password, and a PIN set in their directory entry. See Section 10.2.2, "Setting up PIN-Based Enrollment" . Certificate-based authentication . Entities of some kind - both end users and other entities, like servers or tokens - are authenticated to the CA using a certificate issued by the CA which proves their identity. This is most commonly used for renewal, where the original certificate is presented to authenticate the renewal process. See Section 10.2.3, "Using Certificate-Based Authentication" . AgentCertAuth . This method automatically approves a certificate request if the entity submitting the request is authenticated as a subsystem agent. A user authenticates as an agent by presenting an agent certificate. If the presented certificate is recognized by the subsystem as an agent certificate, then the CA automatically processes the certificate request. This form of automatic authentication can be associated with the certificate profile for enrolling for server certificates. This plug-in is enabled by default and has no parameters. Flat file-based enrollment . Used exclusively for router (SCEP) enrollments, a text file is used which contains a list of IP addresses, hostnames, or other identifier and a password, which is usually a random PIN. A router authenticates to the CA using its ID and PIN, and then the CA compares the presented credentials to the list of identities in the text file. See Section 10.2.4, "Configuring Flat File Authentication" . 10.2.1. Setting up Directory-Based Authentication The UidPwdDirAuth and the UdnPwdDirAuth plug-in modules implement directory-based authentication. End users enroll for a certificate by providing their user IDs or DN and password to authenticate to an LDAP directory. Create an instance of either the UidPwdDirAuth or UdnPwdDirAuth authentication plug-in module and configure the instance. Open the CA Console. In the Configuration tab, select Authentication in the navigation tree. The right pane shows the Authentication Instance tab, which lists the currently configured authentication instances. Note The UidPwdDirAuth plug-in is enabled by default. Click Add . The Select Authentication Plug-in Implementation window appears. Select UidPwdDirAuth for user ID and password authentication, or select UdnPwdDirAuth for DN and password authentication. Fill in the following fields in the Authentication Instance Editor window: Authentication Instance ID. Accept the default instance name, or enter a new name. dnpattern. Specifies a string representing a subject name pattern to formulate from the directory attributes and entry DN. ldapStringAttributes. Specifies the list of LDAP string attributes that should be considered authentic for the end entity. 
If specified, the values corresponding to these attributes are copied from the authentication directory into the authentication token and used by the certificate profile to generate the subject name. Entering values for this parameter is optional. ldapByteAttributes. Specifies the list of LDAP byte (binary) attributes that should be considered authentic for the end entity. If specified, the values corresponding to these attributes will be copied from the authentication directory into the authentication token for use by other modules, such as adding additional information to users' certificates. Entering values for this parameter is optional. ldap.ldapconn.host. Specifies the fully-qualified DNS hostname of the authentication directory. ldap.ldapconn.port. Specifies the TCP/IP port on which the authentication directory listens to requests; if the ldap.ldapconn.secureConn. checkbox is selected, this should be the SSL port number. ldap.ldapconn.secureConn. Specifies the type, SSL or non-SSL, of the port on which the authentication directory listens to requests from the Certificate System. Select if this is an SSL port. ldap.ldapconn.version. Specifies the LDAP protocol version, either 2 or 3 . The default is 3 , since all Directory Servers later than version 3.x are LDAPv3. ldap.basedn. Specifies the base DN for searching the authentication directory. The server uses the value of the uid field from the HTTP input (what a user enters in the enrollment form) and the base DN to construct an LDAP search filter. ldap.minConns. Specifies the minimum number of connections permitted to the authentication directory. The permissible values are 1 to 3 . ldap.maxConns. Specifies the maximum number of connections permitted to the authentication directory. The permissible values are 3 to 10 . Click OK . The authentication instance is set up and enabled. Set the certificate profiles to use to enroll users by setting policies for specific certificates. Customize the enrollment forms by configuring the inputs in the certificate profiles, and include inputs for the information needed by the plug-in to authenticate the user. If the default inputs do not contain all of the information that needs to be collected, submit a request created with a third-party tool. For information on configuring the profiles, see Section 3.7.2, "Inserting LDAP Directory Attribute Values and Other Information into the Subject Alt Name" . Note pkiconsole is being deprecated. Setting up Bound LDAP Connection Some environments require disallowing an anonymous bind for the LDAP server that is used for authentication. To create a bound connection between a CA and the LDAP server, you need to make the following configuration changes: Set up directory-based authentication according to the following example in CS.cfg : auths.instance.UserDirEnrollment.ldap.ldapBoundConn=true auths.instance.UserDirEnrollment.ldap.ldapauth.authtype=BasicAuth auths.instance.UserDirEnrollment.ldap.ldapauth.bindDN=cn=Directory Manager auths.instance.UserDirEnrollment.ldap.ldapauth.bindPWPrompt=externalLDAP externalLDAP.authPrefix=auths.instance.UserDirEnrollment cms.passwordlist=internaldb,replicationdb,externalLDAP where bindPWPrompt is the tag or prompt that is used in the password.conf file; it is also the name used under the cms.passwordlist and authPrefix options. 
Add the tag or prompt from CS.cfg with its password in password.conf : externalLDAP= your_password Setting up External Authorization A directory-based authentication plug-in can also be configured to evaluate the group membership of the user for authentication. To set up the plug-in this way, the following options has to be configured in CS.cfg : groupsEnable is a boolean option that enables retrieval of groups. The default value is false . groupsBasedn is the base DN of groups. It needs to be specified when it differs from the default basedn . groups is the DN component for groups. The default value is ou=groups . groupObjectClass is one of the following group object classes: groupofuniquenames , groupofnames . The default value is groupofuniquenames . groupUseridName is the name of the user ID attribute in the group object member attribute. The default value is cn . useridName is the name of the user ID DN component. The default value is uid . searchGroupUserByUserdn is a boolean option that determines whether to search the group object member attribute for the userdn or USD{groupUserIdName}=USD{uid} attributes. The default value is true . For example: auths.instance.UserDirEnrollment.pluginName=UidPwdDirAuth auths.instance.UserDirEnrollment.ldap.basedn=cn=users,cn=accounts,dc=local auths.instance.UserDirEnrollment.ldap.groupObjectClass=groupofnames auths.instance.UserDirEnrollment.ldap.groups=cn=groups auths.instance.UserDirEnrollment.ldap.groupsBasedn=cn=accounts,dc=local auths.instance.UserDirEnrollment.ldap.groupsEnable=true auths.instance.UserDirEnrollment.ldap.ldapconn.host=local auths.instance.UserDirEnrollment.ldap.ldapconn.port=636 auths.instance.UserDirEnrollment.ldap.ldapconn.secureConn=true Finally, you have to modify the / instance_path /ca/profiles/ca/ profile_id .cfg file to configure the profile to use the UserDirEnrollment auth instance defined in CS.cfg , and if appropriate, provide an ACL for authorization based on groups. For example: auth.instance_id=UserDirEnrollment auths.acl=group="cn=devlab-access,ou=engineering,dc=example,dc=com" 10.2.2. Setting up PIN-Based Enrollment PIN-based authentication involves setting up PINs for each user in the LDAP directory, distributing those PINs to the users, and then having the users provide the PIN along with their user ID and password when filling out a certificate request. Users are then authenticated both against an LDAP directory using their user ID and password and against the PIN in their LDAP entry. When the user successfully authenticates, the request is automatically processed, and a new certificate is issued. The Certificate System provides a tool, setpin , that adds the necessary schema for PINs to the Directory Server and generates the PINs for each user. The PIN tool performs the following functions: Adds the necessary schema for PINs to the LDAP directory. Adds a PIN manager user who has read-write permissions to the PINs that are set up. Sets up ACIs to allow for PIN removal once the PIN has been used, giving read-write permissions for PINs to the PIN manager, and preventing users from creating or changing PINs. Creates PINs in each user entry. Note This tool is documented in the Certificate System Command-Line Tools Guide . Use the PIN tool to add schema needed for PINs, add PINs to the user entries, and then distribute the PINs to users. Open the /usr/share/pki/native-tools/ directory. Open the setpin.conf file in a text editor. Follow the instructions outlined in the file and make the appropriate changes. 
Usually, the parameters which need to be updated are the Directory Server's host name, Directory Manager's bind password, and PIN manager's password. Run the setpin command with its optfile option pointing to the setpin.conf file. The tool modifies the schema with a new attribute (by default, pin ) and a new object class (by default, pinPerson ), creates a pinmanager user, and sets the ACI to allow only the pinmanager user to modify the pin attribute. To generate PINs for specific user entries or to provide user-defined PINs, create an input file with the DNs of those entries listed. For example: For information on constructing an input file, see the PIN generator chapter in the Certificate System Command-Line Tools Guide . Disable setup mode for the setpin command. Either comment out the setup line or change the value to no. Setup mode creates the required users and object classes, but the tool will not generate PINs while in setup mode. Run the setpin command to create PINs in the directory. Note Test-run the tool first without the write option to generate a list of PINs without actually changing the directory. For example: Warning Do not set the hash argument to none . Running the setpin command with hash=none results in the pin being stored in the user LDAP entry as plain text. Use the output file for delivering PINs to users after completing the setup of the required authentication method. After confirming that the PIN-based enrollment works, deliver the PINs to users so they can use them during enrollment. To protect the privacy of PINs, use a secure, out-of-band delivery method. Set the policies for specific certificates in the certificate profiles to enroll users. See Chapter 3, Making Rules for Issuing Certificates (Certificate Profiles) for information about certificate profile policies. Create and configure an instance of the UidPwdPinDirAuth authentication plug-in. Open the CA Console. In the Configuration tab, select Authentication in the navigation tree. The right pane shows the Authentication Instance tab, which lists the currently configured authentication instances. Click Add . The Select Authentication Plug-in Implementation window appears. Select the UidPwdPinDirAuth plug-in module. Fill in the following fields in the Authentication Instance Editor window: Authentication Instance ID. Accept the default instance name or enter a new name. removePin. Sets whether to remove PINs from the authentication directory after end users successfully authenticate. Removing PINs from the directory restricts users from enrolling more than once, and thus prevents them from getting more than one certificate. pinAttr. Specifies the authentication directory attribute for PINs. The PIN Generator utility sets the attribute to the value of the objectclass parameter in the setpin.conf file; the default value for this parameter is pin . dnpattern. Specifies a string representing a subject name pattern to formulate from the directory attributes and entry DN. ldapStringAttributes. Specifies the list of LDAP string attributes that should be considered authentic for the end entity. Entering values for this parameter is optional. ldapByteAttributes. Specifies the list of LDAP byte (binary) attributes that should be considered authentic for the end entity. If specified, the values corresponding to these attributes will be copied from the authentication directory into the authentication token for use by other modules, such as adding additional information to users' certificates.
Entering values for this parameter is optional. ldap.ldapconn.host. Specifies the fully-qualified DNS host name of the authentication directory. ldap.ldapconn.port. Specifies the TCP/IP port on which the authentication directory listens to requests from the Certificate System. ldap.ldapconn.secureConn. Specifies the type, SSL or non-SSL, of the port on which the authentication directory listens to requests. Select if this is an SSL port. ldap.ldapconn.version. Specifies the LDAP protocol version, either 2 or 3 . By default, this is 3 , since all Directory Server versions later than 3.x are LDAPv3. ldap.ldapAuthentication.bindDN. Specifies the user entry as whom to bind when removing PINs from the authentication directory. Specify this parameter only if the removePin checkbox is selected. It is recommended that a separate user entry that has permission to modify only the PIN attribute in the directory be created and used. For example, do not use the Directory Manager's entry because it has privileges to modify the entire directory content. password. Gives the password associated with the DN specified by the ldap.ldapauthbindDN parameter. When saving changes, the server stores the password in the single sign-on password cache and uses it for subsequent start ups. This parameter needs set only if the removePin checkbox is selected. ldap.ldapAuthentication.clientCertNickname. Specifies the nickname of the certificate to use for SSL client authentication to the authentication directory to remove PINs. Make sure that the certificate is valid and has been signed by a CA that is trusted in the authentication directory's certificate database and that the authentication directory's certmap.conf file has been configured to map the certificate correctly to a DN in the directory. This is needed for PIN removal only. ldap.ldapAuthentication.authtype. Specifies the authentication type, basic authentication or SSL client authentication, required in order to remove PINs from the authentication directory. BasicAuth specifies basic authentication. With this option, enter the correct values for ldap.ldapAuthentication.bindDN and password parameters; the server uses the DN from the ldap.ldapAuthentication.bindDN attribute to bind to the directory. SslClientAuth specifies SSL client authentication. With this option, set the value of the ldap.ldapconn.secureConn parameter to true and the value of the ldap.ldapAuthentication.clientCertNickname parameter to the nickname of the certificate to use for SSL client authentication. ldap.basedn. Specifies the base DN for searching the authentication directory; the server uses the value of the uid field from the HTTP input (what a user enters in the enrollment form) and the base DN to construct an LDAP search filter. ldap.minConns. Specifies the minimum number of connections permitted to the authentication directory. The permissible values are 1 to 3 . ldap.maxConns. Specifies the maximum number of connections permitted to the authentication directory. The permissible values are 3 to 10 . Click OK . Customize the enrollment forms by configuring the inputs in the certificate profiles. Include the information that will be needed by the plug-in to authenticate the user. If the default inputs do not contain all of the information that needs to be collected, submit a request created with a third-party tool. Note pkiconsole is being deprecated. 10.2.3. 
Using Certificate-Based Authentication Certificate-based authentication is when a certificate is presented that verifies the identity of the requester and automatically validates and authenticates the request being submitted. This is most commonly used for renewal processes, when the original certificate is presented by the user, server, and application and that certificate is used to authenticate the request. There are other circumstances when it may be useful to use certificate-based authentication for initially requesting a certificate. For example, tokens may be bulk-loaded with generic certificates which are then used to authenticate the users when they enroll for their user certificates or, alternatively, users can be issued signing certificates which they then use to authenticate their requests for encryption certificates. The certificate-based authentication module, SSLclientCertAuth , is enabled by default, and this authentication method can be referenced in any custom certificate profile. 10.2.4. Configuring Flat File Authentication A router certificate is enrolled and authenticated using a randomly-generated PIN. The CA uses the flatFileAuth authentication module to process a text file which contains the router's authentication credentials. 10.2.4.1. Configuring the flatFileAuth Module Flat file authentication is already configured for SCEP enrollments, but the location of the flat file and its authentication parameters can be edited. Open the CA Console. In the Configuration tab, select Authentication in the navigation tree. Select the flatFileAuth authentication module. Click Edit/View . To change the file location and name, reset the fileName field. To change the authentication name parameter, reset the keyAttributes value to another value submitted in the SCEP enrollment form, like CN. It is also possible to use multiple name parameters by separating them by commas, like UID,CN . To change the password parameter name, reset the authAttributes field. Save the edits. Note pkiconsole is being deprecated. 10.2.4.2. Editing flatfile.txt The same flatfile.txt file is used to authenticate every SCEP enrollment. This file must be manually updated every time a new PIN is issued to a router. By default, this file is in /var/lib/pki/pki-ca/ca/conf/ and specifies two parameters per authentication entry, the UID of the site (usually its IP address, either IPv4 or IPv6) and the PIN issued by the router. Each entry must be followed by a blank line. For example: If the authentication entries are not separated by an empty line, then when the router attempts to authenticate to the CA, it will fail. For example: | [
"pkiconsole https://server.example.com:8443/ca",
"auths.instance.UserDirEnrollment.ldap.ldapBoundConn=true auths.instance.UserDirEnrollment.ldap.ldapauth.authtype=BasicAuth auths.instance.UserDirEnrollment.ldap.ldapauth.bindDN=cn=Directory Manager auths.instance.UserDirEnrollment.ldap.ldapauth.bindPWPrompt=externalLDAP externalLDAP.authPrefix=auths.instance.UserDirEnrollment cms.passwordlist=internaldb,replicationdb,externalLDAP",
"externalLDAP= your_password",
"auths.instance.UserDirEnrollment.pluginName=UidPwdDirAuth auths.instance.UserDirEnrollment.ldap.basedn=cn=users,cn=accounts,dc=local auths.instance.UserDirEnrollment.ldap.groupObjectClass=groupofnames auths.instance.UserDirEnrollment.ldap.groups=cn=groups auths.instance.UserDirEnrollment.ldap.groupsBasedn=cn=accounts,dc=local auths.instance.UserDirEnrollment.ldap.groupsEnable=true auths.instance.UserDirEnrollment.ldap.ldapconn.host=local auths.instance.UserDirEnrollment.ldap.ldapconn.port=636 auths.instance.UserDirEnrollment.ldap.ldapconn.secureConn=true",
"auth.instance_id=UserDirEnrollment auths.acl=group=\"cn=devlab-access,ou=engineering,dc=example,dc=com\"",
"setpin optfile=/usr/share/pki/native-tools/setpin.conf",
"dn:uid=bjensen,ou=people,dc=example,dc=com dn:uid=jsmith,ou=people,dc=example,dc=com dn:jtyler,ou=people,dc=example,dc=com",
"vim /usr/share/pki/native-tools/setpin.conf setup=no",
"setpin host=yourhost port=9446 length=11 input=infile output=outfile write \"binddn=cn=pinmanager,o=example.com\" bindpw=\"password\" basedn=o=example.com \"filter=(uid=u*)\" hash=sha256",
"pkiconsole https://server.example.com:8443/ca",
"pkiconsole https://server.example.com:8443/ca",
"UID:192.168.123.123 PIN:HU89dj",
"UID:192.168.123.123 PIN:HU89dj UID:12.255.80.13 PIN:fiowIO89 UID:0.100.0.100 PIN:GRIOjisf",
"... flatfile.txt entry UID:192.168.123.123 PIN:HU89dj UID:12.255.80.13 PIN:fiowIO89 ... error log entry [13/Jun/2020:13:03:09][http-9180-Processor24]: FlatFileAuth: authenticating user: finding user from key: 192.168.123.123 [13/Jun/2020:13:03:09][http-9180-Processor24]: FlatFileAuth: User not found in password file."
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Automated_Enrollment |
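A possible spot-check after running setpin, reusing the bind DN, base DN, host, and port values from the example above, is to confirm that the pin attribute was added to a user entry; the value returned is the hashed PIN, not clear text. This is only a sketch, and the host name and user ID are illustrative:

ldapsearch -x -H ldap://yourhost:9446 -D "cn=pinmanager,o=example.com" -W \
    -b "o=example.com" "(uid=jsmith)" pin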
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/providing-feedback-on-red-hat-documentation_managing-hosts |
Chapter 6. Updating Logging | Chapter 6. Updating Logging There are two types of logging updates: minor release updates (5.y.z) and major release updates (5.y). 6.1. Minor release updates If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps. If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update . 6.2. Major release updates For major version updates you must complete some manual steps. For major release version compatibility and support information, see OpenShift Operator Life Cycles . 6.3. Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components. Prerequisites You have installed the OpenShift CLI ( oc ). You have administrator permissions. Procedure Delete the subscription by running the following command: USD oc -n openshift-logging delete subscription <subscription> Delete the Operator group by running the following command: USD oc -n openshift-logging delete operatorgroup <operator_group_name> Delete the cluster service version (CSV) by running the following command: USD oc delete clusterserviceversion cluster-logging.<version> Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation. Verification Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string. To do this, run the following command and inspect the output: USD oc get operatorgroup <operator_group_name> -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - "" # ... 6.4. Updating the Red Hat OpenShift Logging Operator To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have access to the OpenShift Dedicated web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator. Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the cluster-logging.v5.y.z version. Verification Wait for a few seconds, then click Operators Installed Operators . Verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 6.5. 
Updating the Loki Operator To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Loki Operator. You have administrator permissions. You have access to the OpenShift Dedicated web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-operators-redhat project. Click the Loki Operator . Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the loki-operator.v5.y.z version. Verification Wait for a few seconds, then click Operators Installed Operators . Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 6.6. Updating the OpenShift Elasticsearch Operator To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Prerequisites If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. Important If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. The Logging status is healthy: All pods have a ready status. The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up . You have administrator permissions. You have installed the OpenShift CLI ( oc ) for the verification steps. Procedure In the Red Hat Hybrid Cloud Console, click Operators Installed Operators . Select the openshift-operators-redhat project. Click OpenShift Elasticsearch Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.y and click Save . Note the elasticsearch-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 
Verification Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output: USD oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Verify that the Elasticsearch cluster status is green by entering the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health Example output { "cluster_name" : "elasticsearch", "status" : "green", } Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output: USD oc project openshift-logging USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output: USD oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices: Example 6.1. Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log visualizer is updated to the correct version by entering the following command and observing the output: USD oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 6.2. 
Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [] "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ] | [
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]"
]
| https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/cluster-logging-upgrading |
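The channel switch described above can also be performed from the CLI. A minimal sketch, assuming the Red Hat OpenShift Logging Operator subscription is named cluster-logging (verify the actual name first), might look like this:

oc get subscriptions -n openshift-logging                          # confirm the subscription name
oc patch subscription cluster-logging -n openshift-logging \
    --type merge -p '{"spec":{"channel":"stable-5.9"}}'            # switch to the stable-5.9 channel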
probe::ioscheduler.elv_add_request.kp | probe::ioscheduler.elv_add_request.kp Name probe::ioscheduler.elv_add_request.kp - kprobe based probe to indicate that a request was added to the request queue Synopsis ioscheduler.elv_add_request.kp Values disk_major Disk major number of the request disk_minor Disk minor number of the request rq_flags Request flags elevator_name The type of I/O elevator currently enabled q pointer to request queue rq Address of the request name Name of the probe point | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-elv-add-request-kp |
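As an illustration of how this probe point might be used (a sketch only), the following SystemTap script prints one line for each request added to a request queue, using the variables listed above; run it with stap:

probe ioscheduler.elv_add_request.kp {
    printf("%s: request added on device %d:%d, flags 0x%x\n",
           elevator_name, disk_major, disk_minor, rq_flags)
}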
function::cpu_clock_us | function::cpu_clock_us Name function::cpu_clock_us - Number of microseconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of microseconds on the given cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy). | [
"cpu_clock_us:long(cpu:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cpu-clock-us |
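A small sketch of how this function might be used: read CPU 0's clock at start-up, then report the elapsed microseconds after five seconds.

global t0

probe begin      { t0 = cpu_clock_us(0) }
probe timer.s(5) {
    printf("elapsed on cpu 0: %d us\n", cpu_clock_us(0) - t0)
    exit()
}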
Chapter 8. Memory | Chapter 8. Memory This chapter covers memory optimization options for virtualized environments. 8.1. Memory Tuning Tips To optimize memory performance in a virtualized environment, consider the following: Do not allocate more resources to a guest than it will use. If possible, assign a guest to a single NUMA node, providing that resources are sufficient on that NUMA node. For more information on using NUMA, see Chapter 9, NUMA . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-memory
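As a sketch of the single-NUMA-node recommendation above (the domain name and node number are illustrative), the guest's memory placement policy can be inspected and pinned with virsh:

virsh numatune rhel7-guest                                       # show the current memory placement policy
virsh numatune rhel7-guest --mode strict --nodeset 0 --config    # bind guest memory to NUMA node 0 on the next boot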
Chapter 6. Listing available SCAP contents | Chapter 6. Listing available SCAP contents Use this procedure to view what SCAP contents are already loaded in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your user account has a role assigned that has the view_scap_contents permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > SCAP contents . CLI procedure Run the following Hammer command on Satellite Server: | [
"hammer scap-content list --location \" My_Location \" --organization \" My_Organization \""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/listing-available-scap-contents_security-compliance |
Preface | Preface As a developer or system administrator, you can deploy a variety of Red Hat Decision Manager environments on Red Hat OpenShift Container Platform, such as an authoring environment, a managed server environment, an immutable server environment, and other supported environment options. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_red_hat_decision_manager_on_red_hat_openshift_container_platform/pr01 |
3.5. Date and Time Conversions | 3.5. Date and Time Conversions JBoss Data Virtualization can implicitly convert properly formatted literal strings to their associated date-related data types as follows: Table 3.4. Date and Time Conversions String Literal Format Possible Implicit Conversion Type yyyy-mm-dd DATE hh:mm:ss TIME yyyy-mm-dd hh:mm:ss.[fff...] TIMESTAMP The formats above are those expected by the JDBC date types. To use other formats see the functions PARSEDATE , PARSETIME , PARSETIMESTAMP . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/date_and_time_conversions |
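For illustration (the table and column names are hypothetical), a literal in the expected format converts implicitly, while other formats go through the parse functions mentioned above:

SELECT * FROM orders WHERE order_date = '2011-02-03'                                      -- implicit string-to-DATE conversion
SELECT * FROM orders WHERE order_date = PARSEDATE('02/03/2011', 'MM/dd/yyyy')             -- explicit parse of a non-default format
SELECT * FROM events WHERE created_at = PARSETIMESTAMP('2011-02-03 10:30:00.5', 'yyyy-MM-dd HH:mm:ss.S')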
3.4.4. Modifying User Settings | 3.4.4. Modifying User Settings When a user already exists and you need to change any of these settings, use the usermod command. The logic of using usermod is identical to that of useradd , as is its syntax: If you need to change the user's user name, use the -l option with the new user name (or login). Example 3.10. Changing User's Login The -l option changes the name of the user from the login emily to the new login, emily-smith . Nothing else is changed. In particular, emily 's home directory name ( /home/emily ) remains the same unless it is changed manually to reflect the new user name. In a similar way you can change the user's UID or the user's home directory. See the example below: Note Find all files owned by the specified UID in the system and change their owner. Do the same for any Access Control Lists (ACLs) referring to the UID. It is recommended to check that there are no running processes, as they keep running with the old UID. Example 3.11. Changing User's UID and Home Directory The command with the -a , -u , and -d options changes the settings of the user robert . Now, his ID is 699 instead of 501, and his home directory is no longer /home/robert but /home/dir_2 . With the usermod command you can also move the content of the user's home directory to a new location, or lock the account by locking its password. Example 3.12. Changing User's Home Directory and Locking the Account In this sample command, the -m and -d options used together move the content of jane 's home directory to the /home/dir_3 directory. The -L option locks access to jane 's account by locking its password. For the whole list of options to be used with the usermod command, see the usermod (8) man page or run usermod --help on the command line. | [
"usermod option(s) username",
"~]# usermod -l \"emily-smith\" emily",
"~]# usermod -a -u 699 -d /home/dir_2 robert",
"~]# usermod -m -d /home/jane -L jane"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/cl-tools-usermod |
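A minimal cleanup sketch for the Note above, assuming the UID change from 501 to 699 shown in Example 3.11; the UID values and the single-filesystem scope are illustrative.
# Re-own files that still belong to the old UID (501) after the change to UID 699
find / -xdev -uid 501 -exec chown -h 699 {} \;
# Update any ACL entries that reference the old UID, for example:
# setfacl -x u:501 file && setfacl -m u:699:rwX file
# Confirm that no processes are still running under the old UID
ps -u 501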
Chapter 5. Using build strategies | Chapter 5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 5.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 5.1.1. Replacing the Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object, add the following settings to the BuildConfig object: strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure Set the dockerfilePath field for the build to use a different path to locate your Dockerfile: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 5.1.4. Adding Docker build arguments You can set Docker build arguments using the buildArgs array. The build arguments are passed to Docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set Docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "version" value: "latest" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 5.1.6. 
Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Build inputs Input secrets and config maps 5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command.
S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a BuildConfig object with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image. Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, complete one of the following actions: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition in the BuildConfig object. For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 The build process appends run , assemble , and save-artifacts to the path. If any or all scripts with these names exist, the build process uses these scripts in place of scripts with the same name that are provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables that you provide using either method will be present during the build process and in the output image. 5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 5.2.3.2.
Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 5.2.5.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 5.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. 
save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. 
The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Build inputs Input secrets and config maps 5.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 5.3.1. Using FROM image for custom builds You can use the customStrategy.from section to indicate the image to use for the custom build. Procedure Set the customStrategy.from section: strategy: customStrategy: from: kind: "DockerImage" name: "openshift/sti-image-builder" 5.3.2. Using secrets in custom builds In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod. Procedure To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file: strategy: customStrategy: secrets: - secretSource: 1 name: "secret1" mountPath: "/tmp/secret1" 2 - secretSource: name: "secret2" mountPath: "/tmp/secret2" 1 secretSource is a reference to a secret in the same namespace as the build. 2 mountPath is the path inside the custom builder where the secret should be mounted. 5.3.3. Using environment variables for custom builds To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build. Procedure Define a custom HTTP proxy to be used during build: customStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" To manage environment variables defined in the build configuration, enter the following command: USD oc set env <enter_variables> 5.3.4. Using custom builder images OpenShift Container Platform's custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy. A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images. 
Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests. 5.3.4.1. Custom builder image Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build: Table 5.2. Custom Builder Environment Variables Variable Name Description BUILD The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration. SOURCE_REPOSITORY The URL of a Git repository with source to be built. SOURCE_URI Uses the same value as SOURCE_REPOSITORY . Either can be used. SOURCE_CONTEXT_DIR Specifies the subdirectory of the Git repository to be used when building. Only present if defined. SOURCE_REF The Git reference to be built. ORIGIN_VERSION The version of the OpenShift Container Platform master that created this build object. OUTPUT_REGISTRY The container image registry to push the image to. OUTPUT_IMAGE The container image tag name for the image being built. PUSH_DOCKERCFG_PATH The path to the container registry credentials for running a podman push operation. 5.3.4.2. Custom builder workflow Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform: The Build object definition contains all the necessary information about input parameters for the build. Run the build process. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables. 5.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 5.4.1. Understanding OpenShift Container Platform pipelines Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. 
OpenShift Container Platform Jenkins Sync Plugin The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Container Platform web console. Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. OpenShift Container Platform Jenkins Client Plugin The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image. For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir . Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard groovy language syntax to allow fine-grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository. A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository.
The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. 
If you specify an environment variable that is not listed in the build configuration, it will be added as a Jenkins job parameter definition. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e takes precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 5.4.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method.
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. Create the Pipeline BuildConfig in your OpenShift Container Platform cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. 
A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example:stage . The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 5.5. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console: Create a new OpenShift Container Platform project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 5.6. Enabling pulling and pushing You can enable pulling to a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling to a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration. | [
"strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"",
"strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile",
"dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"",
"strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers",
"spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1",
"sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"",
"strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"",
"customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"oc set env <enter_variables>",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1",
"jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"",
"oc project <project_name>",
"oc new-app jenkins-ephemeral 1",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline",
"def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }",
"oc create -f nodejs-sample-pipeline.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml",
"oc start-build nodejs-sample-pipeline"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/build-strategies |
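The build strategy sections above refer several times to managing build-configuration environment variables with the oc set env command and to overriding them at build time with oc start-build -e. A brief sketch of both follows, reusing names from the examples above (sample-pipeline, HTTP_PROXY, FOO); treat the exact values as illustrative.
# Set or update an environment variable on a BuildConfig
oc set env bc/sample-pipeline HTTP_PROXY=http://myproxy.net:5187/
# List the environment variables currently defined on the BuildConfig
oc set env bc/sample-pipeline --list
# Override a variable for a single build; values passed with -e take precedence
oc start-build sample-pipeline -e FOO=bar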
3.2. Mounting an XFS File System | 3.2. Mounting an XFS File System An XFS file system can be mounted with no extra options, for example: Red Hat Enterprise Linux 7 uses the inode64 mount option by default. Note Unlike mke2fs , mkfs.xfs does not utilize a configuration file; all options are specified on the command line. Write Barriers By default, XFS uses write barriers to ensure file system integrity even when power is lost to a device with write caches enabled. For devices without write caches, or with battery-backed write caches, disable the barriers by using the nobarrier option: For more information about write barriers, see Chapter 22, Write Barriers . Direct Access Technology Preview Since Red Hat Enterprise Linux 7.3, Direct Access (DAX) is available as a Technology Preview on the ext4 and XFS file systems. It is a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual Inline Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option; a short mount sketch follows this section's command listing. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. | [
"mount /dev/ device /mount/point",
"mount -o nobarrier /dev/device /mount/point"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsmounting |
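A short sketch of the DAX mount described above; the persistent memory device name and mount point are illustrative assumptions.
# Mount an XFS file system that resides on an NVDIMM-backed device with DAX enabled
mount -o dax /dev/pmem0 /mnt/pmem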
Chapter 1. Red Hat Quay release notes | Chapter 1. Red Hat Quay release notes The following sections detail y and z stream release information. 1.1. RHBA-2025:1598 - Red Hat Quay 3.12.8 release Issued 2025-02-20 Red Hat Quay release 3.12.8 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2025:1598 advisory. 1.2. RHBA-2025:0250 - Red Hat Quay 3.12.7 release Issued 2025-01-15 Red Hat Quay release 3.12.7 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2025:0250 advisory. 1.3. RHBA-2024:10968 - Red Hat Quay 3.12.6 release Issued 2024-12-11 Red Hat Quay release 3.12.6 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:10968 advisory. 1.3.1. Red Hat Quay 3.12.6 bug fixes PROJQUAY-7977 . When deploying Red Hat Quay with a custom HorizontalPodAutoscaler component and then setting the component to managed: false in the QuayRegistry custom resource definition (CRD), the Red Hat Quay Operator continuously terminates and resets the minReplicas value to 2 for mirror and clair components. To work around this issue, see Using unmanaged Horizontal Pod Autoscalers . 1.4. RHBA-2024:10182 - Red Hat Quay 3.12.5 release Issued 2024-11-25 Red Hat Quay release 3.12.5 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:10182 advisory. 1.4.1. Red Hat Quay 3.12.5 bug fixes PROJQUAY-8024 . Previously, using Hitachi HCP v9.7 as your storage provider would return errors when attempting to pull images. This issue has been resolved. PROJQUAY-5086 . Previously, Red Hat Quay on OpenShift Container Platform would produce information about horizontal pod autoscalers (HPAs) for some components (for example, Clair , Redis , PostgreSQL , and ObjectStorage ) when they were unmanaged by the Operator. This issue has been resolved and information about HPAs is no longer reported for unmanaged components. PROJQUAY-8243 . Previously, the REST API was more lenient than its UI counterpart when executing certain calls. For example, using the API allows users to create team names with dashes ( - ) in them, while the web UI would not allow dashes. This issue has been resolved. 1.5. RHBA-2024:8371 - Red Hat Quay 3.12.4 release Issued 2024-10-31 Red Hat Quay release 3.12.4 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:8371 advisory. 1.5.1. Red Hat Quay 3.12.4 bug fixes PROJQUAY-6465 . Previously, when the quota management feature was enabled, the actual quota consumed was incorrectly displaying NaN KiB . This issue has been resolved. PROJQUAY-8006 . Previously, PROJQUAY-8023 . Previously, the GLOBAL_READONLY_SUPER_USERS configuration field did not work in tandem with the /v2/_catalog endpoint, and superusers could not properly use the /v2/_catalog endpoint. This issue has been resolved. PROJQUAY-8059 . Previously, during a Red Hat Quay bootstrap, the validator would print out the database URI if debugging was enabled, and would not obfuscate the password. This could cause security concerns. This issue has been resolved. 1.6. RHBA-2024:7072 - Red Hat Quay 3.12.3 release Issued 2024-09-26 Red Hat Quay release 3.12.3 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:7072 advisory. 1.6.1. Red Hat Quay 3.12.3 bug fixes PROJQUAY-7561 .
Previously, pulling an image failed when using Hitachi Content Platform for Cloud Scale (HCP-CS). This issue has been resolved; however, HCP-CS is still not supported for use with Red Hat Quay. PROJQUAY-7735 . Previously, there was a bug on the Red Hat Quay v2 UI wherein users could not select the When the image is due to expiry in days event trigger. This issue has been resolved. PROJQUAY-6562 . Previously, too many INFO log lines were reported in the quay-app logs. This issue has been resolved. INFO log lines now return when using the debug option. 1.7. RHBA-2024:6048 - Red Hat Quay 3.12.2 release Issued 2024-09-03 Red Hat Quay release 3.12.2 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:6048 advisory. 1.7.1. Red Hat Quay 3.12.2 bug fixes PROJQUAY-7598 . Invalid manifests are now returned when using the API. PROJQUAY-7689 . Previously, there was a bug affecting STS S3Storage engines, in which a config error was returned. This bug has been resolved. 1.8. RHBA-2024:5039 - Red Hat Quay 3.12.1 release Issued 2024-08-14 Red Hat Quay release 3.12.1 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:5039 advisory. 1.8.1. Red Hat Quay 3.12.1 new features With this release, NetApp ONTAP S3 object storage is now supported. For more information, see NetApp ONTAP S3 object storage . 1.8.2. Red Hat Quay 3.12.1 known issues When using NetApp ONTAP S3 object storage, images with large layer sizes fail to push. This is a known issue and will be fixed in a future version of Red Hat Quay. ( PROJQUAY-7462 ). 1.8.3. Red Hat Quay 3.12.1 bug fixes PROJQUAY-7177 . Previously, global read-only superusers could not obtain resources from an organization when using the API. This issue has been resolved. PROJQUAY-7446 . Previously, global read-only superusers could not obtain correct information when using the listRepos API endpoints. This issue has been resolved. PROJQUAY-7449 . Previously, global read-only superusers could not use some superuser API endpoints. This issue has been resolved. PROJQUAY-7487 . Previously, when a repository had multiple notifications enabled, the wrong type of event notification could be triggered. This issue has been resolved. PROJQUAY-7491 . When using NetApp's ONTAP S3 implementation, the following error could be returned: presigned URL request computed using signature-version v2 is not supported by ONTAP-S3 . This error occurred because boto iterates over a map of authentications if none is requested, and returns v2 because it is ordered earlier than v4 . This issue has been fixed, and the error is no longer returned. PROJQUAY-7578 . On the 3.12.1 UI, the release notes pointed to Red Hat Quay's 3.7 release. This has been fixed, and they now point to the current version. 1.8.4. Upgrading to Red Hat Quay 3.12.1 For information about upgrading standalone Red Hat Quay deployments, see Standalone upgrade . For information about upgrading Red Hat Quay on OpenShift Container Platform, see Upgrading the Red Hat Quay Operator . 1.9. RHBA-2024:4525 - Red Hat Quay 3.12.0 release Issued 2024-07-23 Red Hat Quay release 3.12 is now available with Clair 4.7.4. The bug fixes that are included in the update are listed in the RHBA-2024:4525 advisory. For the most recent compatibility matrix, see Quay Enterprise 3.x Tested Integrations . 1.10.
Red Hat Quay release cadence With the release of Red Hat Quay 3.10, the product has begun to align its release cadence and lifecycle with OpenShift Container Platform. As a result, Red Hat Quay releases are now generally available (GA) within approximately four weeks of the most recent version of OpenShift Container Platform. Customers can now expect the support lifecycle phases of Red Hat Quay to align with OpenShift Container Platform releases. For more information, see the Red Hat Quay Life Cycle Policy . 1.11. Red Hat Quay documentation changes The following documentation changes have been made with the Red Hat Quay 3.12 release: The Use Red Hat Quay guide now includes accompanying API procedures for basic operations, such as creating and deleting repositories and organizations by using the API, access management, and so on. 1.12. Red Hat Quay new features and enhancements The following updates have been made to Red Hat Quay. 1.12.1. Splunk event collector enhancements With this update, Red Hat Quay administrators can configure their deployment to forward action logs directly to a Splunk HTTP Event Collector (HEC). This enhancement enables seamless integration with Splunk for comprehensive log management and analysis. For more information, see Configuring action log storage for Splunk . 1.12.2. API token ownership Previously, when a Red Hat Quay organization owner created an API OAuth token, and that API OAuth token was used by another organization member, the action was logged to the creator of the token. This was undesirable for auditing purposes, notably in restricted environments where only dedicated registry administrators are organization owners. With this release, organization administrators can now assign OAuth API tokens to be created by other users with specific permissions. This allows the audit logs to be reflected accurately when the token is used by a user that has no organization administrative permissions to create an OAuth API token. For more information, see Reassigning an OAuth access token . 1.12.3. Image expiration notification Previously, Red Hat Quay administrators and users had no way of being alerted when an image was about to expire. With this update, an event can be configured to notify users when an image is about to expire. This helps Red Hat Quay users avoid unexpected pull failures. Image expiration event triggers can be configured to notify users through email, Slack, webhooks, and so on, and can be configured at the repository level. Triggers can be set for images expiring in any number of days, and can work in conjunction with the auto-pruning feature. For more information, see Creating an image expiration notification . 1.12.4. Red Hat Quay auto-pruning enhancements With the release of Red Hat Quay 3.10, a new auto-pruning feature was released. With that feature, Red Hat Quay administrators could set up auto-pruning policies on namespaces for both users and organizations so that image tags were automatically deleted based on specified criteria. In Red Hat Quay 3.11, this feature was enhanced so that auto-pruning policies could be set up on specified repositories. With this release, default auto-pruning policies can now be set up at the registry level. Default auto-pruning policies set up at the registry level can be configured on new and existing organizations. This feature saves Red Hat Quay administrators time, effort, and storage by enforcing registry-wide rules.
Red Hat Quay administrators must enable this feature by updating their config.yaml file to include the DEFAULT_NAMESPACE_AUTOPRUNE_POLICY configuration field and one of number_of_tags or creation_date methods. Currently, this feature cannot be enabled by using the v2 UI or the API. For more information, see Red Hat Quay auto-pruning overview . 1.12.5. Open Container Initiative 1.1 implementation Red Hat Quay now supports the Open Container Initiative (OCI) 1.1 distribution spec version 1.1. Key highlights of this update include support for the following areas: Enhanced capabilities for handling various types of artifacts, which provides better flexibility and compliance with OCI 1.1. Introduction of new reference types, which allows more descriptive referencing of artifacts. Introduction of the referrers API , which aids in the retrieval and management of referrers, which helps improve container image management. Enhance UI to better visualize referrers, which makes it easier for users to track and manage dependencies. For more information about OCI spec 1.1, see OCI Distribution Specification . For more information about OCI support and Red Hat Quay, see Open Container Initiative support . 1.12.6. Metadata support through annotations Some OCI media types do not utilize labels and, as such, critical information such as expiration timestamps are not included. With this release, Red Hat Quay now supports metadata passed through annotations to accommodate OCI media types that do not include these labels for metadata transmission. Tools such as ORAS (OCI Registry as Storage) can now be used to embed information with artifact types to help ensure that images operate properly, for example, to expire. For more information about OCI media types and how adding an annotation with ORAS works, see Open Container Initiative support . 1.12.7. Red Hat Quay v2 UI enhancements The following enhancements have been made to the Red Hat Quay v2 UI. 1.12.7.1. Robot account creation enhancement When creating a robot account with the Red Hat Quay v2 UI, administrators can now specify that the kubernetes runtime use a secret only for a specific organization or repository. This option can be selected by clicking the name of your robot account on the v2 UI, and then clicking the Kubernetes tab. 1.13. New Red Hat Quay configuration fields The following configuration fields have been added to Red Hat Quay 3.12. 1.13.1. OAuth access token reassignment configuration field The following configuration field has been added for reassigning OAuth access tokens: Field Type Description FEATURE_ASSIGN_OAUTH_TOKEN Boolean Allows organization administrators to assign OAuth tokens to other users. Example OAuth access token reassignment YAML # ... FEATURE_ASSIGN_OAUTH_TOKEN: true # ... 1.13.2. Notification interval configuration field The following configuration field has been added to enhance Red Hat Quay notifications: Field Type Description NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES Integer The interval, in minutes, that defines the frequency to re-run notifications for expiring images. By default, this field is set to notify Red Hat Quay users of events happening every 5 hours. Example notification re-run YAML # ... NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 10 # ... 1.13.3. 
Registry auto-pruning configuration fields The following configuration fields have been added to Red Hat Quay auto-pruning feature: Field Type Description NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES Integer The interval, in minutes, that defines the frequency to re-run notifications for expiring images. Default: 300 DEFAULT_NAMESPACE_AUTOPRUNE_POLICY Object The default organization-wide auto-prune policy. .method: number_of_tags Object The option specifying the number of tags to keep. .value: <integer> Integer When used with method: number_of_tags , denotes the number of tags to keep. For example, to keep two tags, specify 2 . .method: creation_date Object The option specifying the duration of which to keep tags. .value: <integer> Integer When used with creation_date , denotes how long to keep tags. Can be set to seconds ( s ), days ( d ), months ( m ), weeks ( w ), or years ( y ). Must include a valid integer. For example, to keep tags for one year, specify 1y . AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD Integer The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds. Example registry auto-prune policy by number of tags DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 10 Example registry auto-prune policy by creation date DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 1y 1.13.4. Vulnerability detection notification configuration field The following configuration field has been added to notify users on detected vulnerabilities based on security level: Field Type Description NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX String Set minimal security level for new notifications on detected vulnerabilities. Avoids creation of large number of notifications after first index. If not defined, defaults to High . Available options include Critical , High , Medium , Low , Negligible , and Unknown . Example image vulnerability notification YAML NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX: High 1.13.5. OCI referrers API configuration field The following configuration field allows users to list OCI referrers of a manifest under a repository by using the v2 API: Field Type Description FEATURE_REFERRERS_API Boolean Enables OCI 1.1's referrers API. Example OCI referrers enablement YAML # ... FEATURE_REFERRERS_API: True # ... 1.13.6. Disable strict logging configuration field The following configuration field has been added to address when external systems like Splunk or ElasticSearch are configured as audit log destinations but are intermittently unavailable. When set to True , the logging event is logged to the stdout instead. Field Type Description ALLOW_WITHOUT_STRICT_LOGGING Boolean When set to True , if the external log system like Splunk or ElasticSearch is intermittently unavailable, allows users to push images normally. Events are logged to the stdout instead. Overrides ALLOW_PULLS_WITHOUT_STRICT_LOGGING if set. Example strict logging YAML # ... ALLOW_WITHOUT_STRICT_LOGGING: True # ... 1.13.7. Clair indexing layer size configuration field The following configuration field has been added for the Clair security scanner, which allows Red Hat Quay administrators to set a maximum layer size allowed for indexing. Field Type Description SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE String The maximum layer size allowed for indexing. 
If the layer size exceeds the configured size, the Red Hat Quay UI returns the following message: The manifest for this tag has layer(s) that are too large to index by the Quay Security Scanner . The default is 8G , and the maximum recommended is 10G . Example : 8G 1.14. API endpoint enhancements 1.14.1. New changeOrganizationQuota and createOrganizationQuota endpoints: The following optional API field has been added to the changeOrganizationQuota and createOrganizationQuota endpoints: Name Description Schema limits optional Human readable storage capacity of the organization. Accepts SI units like Mi, Gi, or Ti, as well as non-standard units like GB or MB. Must be mutually exclusive with limit_bytes . string Use this field to set specific limits when creating or changing an organization's quote limit. For more information about these endpoints, see changeOrganizationQuota and createOrganizationQuota . 1.14.2. New referrer API endpoint The following API endpoint allows use to obtain referrer artifact information: Type Name Description Schema path orgname required The name of the organization string path repository required The full path of the repository. e.g. namespace/name string path referrers required Looks up the OCI referrers of a manifest under a repository. string To use this field, you must generate a v2 API OAuth token and set FEATURE_REFERRERS_API: true in your config.yaml file. For more information, see Creating an OCI referrers OAuth access token . 1.15. Red Hat Quay 3.12 known issues and limitations The following sections note known issues and limitations for Red Hat Quay 3.12. 1.15.1. Red Hat Quay v2 UI known issues The Red Hat Quay team is aware of the following known issues on the v2 UI: PROJQUAY-6910 . The new UI can't group and stack the chart on usage logs PROJQUAY-6909 . The new UI can't toggle the visibility of the chart on usage log PROJQUAY-6904 . "Permanently delete" tag should not be restored on new UI PROJQUAY-6899 . The normal user can not delete organization in new UI when enable FEATURE_SUPERUSERS_FULL_ACCESS PROJQUAY-6892 . The new UI should not invoke not required stripe and status page PROJQUAY-6884 . The new UI should show the tip of slack Webhook URL when creating slack notification PROJQUAY-6882 . The new UI global readonly super user can't see all organizations and image repos PROJQUAY-6881 . The new UI can't show all operation types in the logs chart PROJQUAY-6861 . The new UI "Last Modified" of organization always show N/A after target organization's setting is updated PROJQUAY-6860 . The new UI update the time machine configuration of organization show NULL in usage logs PROJQUAY-6859 . Thenew UI remove image repo permission show "undefined" for organization name in audit logs PROJQUAY-6852 . "Tag manifest with the branch or tag name" option in build trigger setup wizard should be checked by default. PROJQUAY-6832 . The new UI should validate the OIDC group name when enable OIDC Directory Sync PROJQUAY-6830 . The new UI should show the sync icon when the team is configured sync team members from OIDC Group PROJQUAY-6829 . The new UI team member added to team sync from OIDC group should be audited in Organization logs page PROJQUAY-6825 . Build cancel operation log can not be displayed correctly in new UI PROJQUAY-6812 . The new UI the "performer by" is NULL of build image in logs page PROJQUAY-6810 . The new UI should highlight the tag name with tag icon in logs page PROJQUAY-6808 . 
The new UI can't click the robot account to show credentials in logs page PROJQUAY-6807 . The new UI can't see the operations types in log page when quay is in dark mode PROJQUAY-6770 . The new UI build image by uploading Docker file should support .tar.gz or .zip PROJQUAY-6769 . The new UI should not display message "Trigger setup has already been completed" after build trigger setup completed PROJQUAY-6768 . The new UI can't navigate back to current image repo from image build PROJQUAY-6767 . The new UI can't download build logs PROJQUAY-6758 . The new UI should display correct operation number when hover over different operation type PROJQUAY-6757 . The new UI usage log should display the tag expiration time as date format 1.15.2. Red Hat Quay 3.12 limitations The following features are not supported on IBM Power ( ppc64le ) or IBM Z ( s390x ): Ceph RadosGW storage Splunk HTTP Event Collector (HEC) 1.16. Red Hat Quay bug fixes The following issues were fixed with Red Hat Quay 3.12: PROJQUAY-6763 . Quay 3.11 new UI operations of enable/disable team sync from OIDC group should be audited PROJQUAY-6826 . Log histogram can't be hidden in the new UI PROJQUAY-6855 . Quay 3.11 new UI no usage log to audit operations under user namespace PROJQUAY-6857 . Quay 3.11 new UI usage log chart covered the operations types list PROJQUAY-6931 . OCI-compliant pagination PROJQUAY-6972 . Quay 3.11 new UI can't open repository page when Quay has 2k orgs and 2k image repositories PROJQUAY-7037 . Can't get slack and email notification when package vulnerability found PROJQUAY-7069 . Invalid time format error messages and layout glitches in tag expiration modal PROJQUAY-7107 . Quay.io overview page does not work in dark mode PROJQUAY-7239 . Quay logging exception when caching specific security_reports PROJQUAY-7304 . security: Add Vary header to 404 responses PROJQUAY-6973 . Add OCI Pagination PROJQUAY-6974 . Set a default auto-pruning policy at the registry level PROJQUAY-6976 . Org owner can change ownership of API tokens PROJQUAY-6977 . Trigger event on image expiration PROJQUAY-6979 . Annotation Parsing PROJQUAY-6980 . Add support for a global read only superuser PROJQUAY-7360 . Missing index on subject_backfilled field in manifest table PROJQUAY-7393 . Create backfill index concurrently PROJQUAY-7116 . Allow to ignore audit logging failures 1.17. Red Hat Quay feature tracker New features have been added to Red Hat Quay, some of which are currently in Technology Preview. Technology Preview features are experimental features and are not intended for production use. Some features available in releases have been deprecated or removed. Deprecated functionality is still included in Red Hat Quay, but is planned for removal in a future release and is not recommended for new deployments. For the most recent list of deprecated and removed functionality in Red Hat Quay, refer to Table 1.1. Additional details for more fine-grained functionality that has been deprecated and removed are listed after the table. Table 1.1. 
New features tracker Feature Quay 3.12 Quay 3.11 Quay 3.10 Splunk HTTP Event Collector (HEC) support General Availability - - Open Container Initiative 1.1 support General Availability - - Reassigning an OAuth access token General Availability - - Creating an image expiration notification General Availability - - Team synchronization for Red Hat Quay OIDC deployments General Availability General Availability - Configuring resources for managed components on OpenShift Container Platform General Availability General Availability - Configuring AWS STS for Red Hat Quay , Configuring AWS STS for Red Hat Quay on OpenShift Container Platform General Availability General Availability - Red Hat Quay repository auto-pruning General Availability General Availability - Configuring dark mode on the Red Hat Quay v2 UI General Availability General Availability - Disabling robot accounts General Availability General Availability General Availability Red Hat Quay namespace auto-pruning General Availability General Availability General Availability FEATURE_UI_V2 Technology Preview Technology Preview Technology Preview 1.17.1. IBM Power, IBM Z, and IBM(R) LinuxONE support matrix Table 1.2. list of supported and unsupported features Feature IBM Power IBM Z and IBM(R) LinuxONE Allow team synchronization via OIDC on Azure Not Supported Not Supported Backing up and restoring on a standalone deployment Supported Supported Clair Disconnected Supported Supported Geo-Replication (Standalone) Supported Supported Geo-Replication (Operator) Not Supported Not Supported IPv6 Not Supported Not Supported Migrating a standalone to operator deployment Supported Supported Mirror registry Not Supported Not Supported PostgreSQL connection pooling via pgBouncer Supported Supported Quay config editor - mirror, OIDC Supported Supported Quay config editor - MAG, Kinesis, Keystone, GitHub Enterprise Not Supported Not Supported Quay config editor - Red Hat Quay V2 User Interface Supported Supported Quay Disconnected Supported Supported Repo Mirroring Supported Supported | [
"FEATURE_ASSIGN_OAUTH_TOKEN: true",
"NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 10",
"DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 10",
"DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 1y",
"NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX: High",
"FEATURE_REFERRERS_API: True",
"ALLOW_WITHOUT_STRICT_LOGGING: True"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_release_notes/release-notes-312 |
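The new configuration fields introduced in Red Hat Quay 3.12 and described above can be combined in a single config.yaml file. The following fragment is an illustrative sketch only: the 10-minute notification interval, the ten-tag prune policy, and the decision to enable every feature at once are example choices rather than recommendations, and the individual field descriptions above remain the authoritative reference. As with other configuration changes, the registry typically must be restarted or redeployed for the new values to take effect.
Example combined config.yaml fragment (illustrative)
FEATURE_ASSIGN_OAUTH_TOKEN: true              # allow organization administrators to reassign OAuth tokens
FEATURE_REFERRERS_API: true                   # enable the OCI 1.1 referrers API
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 10   # re-run expiring-image notifications every 10 minutes
NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX: High  # only create notifications for High or more severe vulnerabilities
ALLOW_WITHOUT_STRICT_LOGGING: true            # log to stdout if Splunk or Elasticsearch is intermittently unavailable
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:           # registry-wide default auto-prune policy
  method: number_of_tags
  value: 10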
Preface | Preface Cryostat is a container-native Java application that connects your Java workloads running inside an OpenShift cluster to your desktop JDK Mission Control (JMC) application. The primary feature for Cryostat 2.0 is that you can install and deploy Cryostat through the OpenShift Operator in the OperatorHub of the OpenShift Container Platform web console. After you install and deploy Cryostat, you can access a fully featured Cryostat instance and then explore any of the additional features listed in the Release notes for the Red Hat build of Cryostat 2.0 guide. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope (Red Hat Customer Portal). | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.0/preface-cryostat-2-0 |
Chapter 4. Support for FIPS cryptography | Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 8 computer that is configured to operate in FIPS mode. Running RHEL 9 with FIPS mode enabled to install an OpenShift Container Platform cluster is not possible. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. These configuration methods ensure that your cluster meets the requirements of a FIPS compliance audit: only FIPS validated or Modules In Process cryptography packages are enabled before the initial system boot. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL8 core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.12 Attributes Limitations FIPS support in RHEL 8 and RHCOS operating systems. The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 8 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using the x86_64 , ppc64le , and s390x architectures. 4.2. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.2.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. 
After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.2.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.2.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.3. Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Amazon Web Services Alibaba Cloud Microsoft Azure Bare metal Google Cloud Platform IBM Cloud VPC IBM Power IBM Z and IBM(R) LinuxONE IBM Z and IBM(R) LinuxONE with RHEL KVM Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Enabling FIPS Mode in the RHEL 8 documentation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installation_overview/installing-fips |
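As noted above, FIPS mode is requested by setting fips: true in the install-config.yaml file before the cluster is deployed. The fragment below is a minimal sketch only; the base domain and cluster name are placeholders, the platform-specific stanzas are omitted, and the file must still be generated by the installation program for your chosen infrastructure before you edit it.
Example install-config.yaml fragment (illustrative)
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: fips-enabled-cluster   # placeholder cluster name
fips: true                     # enables FIPS validated or Modules In Process cryptographic modules
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Because FIPS must be enabled before the operating system boots for the first time, this value cannot be changed after the cluster is deployed.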
Chapter 5. The proc File System | Chapter 5. The proc File System The Linux kernel has two primary functions: to control access to physical devices on the computer and to schedule when and how processes interact with these devices. The /proc/ directory - also called the proc file system - contains a hierarchy of special files which represent the current state of the kernel - allowing applications and users to peer into the kernel's view of the system. Within the /proc/ directory, one can find a wealth of information detailing the system hardware and any processes currently running. In addition, some of the files within the /proc/ directory tree can be manipulated by users and applications to communicate configuration changes to the kernel. 5.1. A Virtual File System Under Linux, all data are stored as files. Most users are familiar with the two primary types of files: text and binary. But the /proc/ directory contains another type of file called a virtual file . It is for this reason that /proc/ is often referred to as a virtual file system . These virtual files have unique qualities. Most of them are listed as zero bytes in size and yet when one is viewed, it can contain a large amount of information. In addition, most of the time and date settings on virtual files reflect the current time and date, indicative of the fact they are constantly updated. Virtual files such as /proc/interrupts , /proc/meminfo , /proc/mounts , and /proc/partitions provide an up-to-the-moment glimpse of the system's hardware. Others, like the /proc/filesystems file and the /proc/sys/ directory provide system configuration information and interfaces. For organizational purposes, files containing information on a similar topic are grouped into virtual directories and sub-directories. For instance, /proc/ide/ contains information for all physical IDE devices. Likewise, process directories contain information about each running process on the system. 5.1.1. Viewing Virtual Files By using the cat , more , or less commands on files within the /proc/ directory, users can immediately access enormous amounts of information about the system. For example, to display the type of CPU a computer has, type cat /proc/cpuinfo to receive output similar to the following: When viewing different virtual files in the /proc/ file system, some of the information is easily understandable while some is not human-readable. This is in part why utilities exist to pull data from virtual files and display it in a useful way. Examples of these utilities include lspci , apm , free , and top . Note Some of the virtual files in the /proc/ directory are readable only by the root user. 5.1.2. Changing Virtual Files As a general rule, most virtual files within the /proc/ directory are read-only. However, some can be used to adjust settings in the kernel. This is especially true for files in the /proc/sys/ subdirectory. To change the value of a virtual file, use the echo command and a greater than symbol ( > ) to redirect the new value to the file. For example, to change the hostname on the fly, type: Other files act as binary or boolean switches. Typing cat /proc/sys/net/ipv4/ip_forward returns either a 0 or a 1 . A 0 indicates that the kernel is not forwarding network packets. Using the echo command to change the value of the ip_forward file to 1 immediately turns packet forwarding on. Note Another command used to alter settings in the /proc/sys/ subdirectory is /sbin/sysctl . 
For more information on this command, refer to Section 5.4, "Using the sysctl Command" For a listing of some of the kernel configuration files available in the /proc/sys/ subdirectory, refer to Section 5.3.9, " /proc/sys/ " . | [
"processor : 0 vendor_id : AuthenticAMD cpu family : 5 model : 9 model name : AMD-K6(tm) 3D+ Processor stepping : 1 cpu MHz : 400.919 cache size : 256 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr mce cx8 pge mmx syscall 3dnow k6_mtrr bogomips : 799.53",
"echo www.example.com > /proc/sys/kernel/hostname"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-proc |
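A short interactive session illustrates both approaches described above for the ip_forward switch. This is a sketch only, run as root; it assumes packet forwarding is initially disabled, and changes made this way are not persistent across reboots unless they are also recorded in a configuration file such as /etc/sysctl.conf.
# Read the current value through the virtual file (0 = forwarding off)
cat /proc/sys/net/ipv4/ip_forward
0
# Turn packet forwarding on by redirecting a new value into the file
echo 1 > /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward
1
# The same setting can be read and written with /sbin/sysctl
/sbin/sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/sbin/sysctl -w net.ipv4.ip_forward=0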
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations | Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations In OpenShift Container Platform version 4.15, you can install a customized cluster on infrastructure that the installation program provisions on IBM Power Virtual Server. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. 
Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 4.6.1. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: "ibmcloud-resource-group" 10 serviceInstanceGUID: "powervs-region-service-instance-guid" vpcRegion : vpc-region publish: External pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as: (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The name of an existing resource group. 11 Required. The installation program prompts you for this value. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. 
While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 
4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. steps Customize your cluster If necessary, you can opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 10 serviceInstanceGUID: \"powervs-region-service-instance-guid\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-customizations |
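Taken together, the steps in this chapter can be strung into a single shell session. The sketch below only restates, in order, the commands already shown above; every angle-bracketed value is a placeholder you must replace, and it assumes the openshift-install, oc, and ccoctl binaries are already on your PATH.
#!/bin/bash
# Placeholder values -- replace before running
export IBMCLOUD_API_KEY=<api_key>
INSTALL_DIR=<installation_directory>

# Generate the installation configuration, then edit it to set
# credentialsMode: Manual and any other customizations before continuing
./openshift-install create install-config --dir "${INSTALL_DIR}"

# Manual credentials mode: create manifests, extract CredentialsRequests, create service IDs
./openshift-install create manifests --dir "${INSTALL_DIR}"
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract --from=${RELEASE_IMAGE} --credentials-requests --included \
    --install-config="${INSTALL_DIR}/install-config.yaml" \
    --to=<path_to_directory_for_credentials_requests>
ccoctl ibmcloud create-service-id \
    --credentials-requests-dir=<path_to_credential_requests_directory> \
    --name=<cluster_name> --output-dir="${INSTALL_DIR}" \
    --resource-group-name=<resource_group_name>

# Deploy the cluster and log in with the generated kubeconfig
./openshift-install create cluster --dir "${INSTALL_DIR}" --log-level=info
export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"
oc whoami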
8.101. java-1.7.0-openjdk | 8.101. java-1.7.0-openjdk 8.101.1. RHSA-2014:1620 - Important: java-1.7.0-openjdk security and bug fix update Updated java-1.7.0-openjdk packages that fix multiple security issues and one bug are now available for Red Hat Enterprise Linux 6 and 7. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The java-1.7.0-openjdk packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Java Software Development Kit. Security Fixes CVE-2014-6506 , CVE-2014-6531 , CVE-2014-6502 , CVE-2014-6511 , CVE-2014-6504 , CVE-2014-6519 Multiple flaws were discovered in the Libraries, 2D, and Hotspot components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. CVE-2014-6517 It was discovered that the StAX XML parser in the JAXP component in OpenJDK performed expansion of external parameter entities even when external entity substitution was disabled. A remote attacker could use this flaw to perform XML eXternal Entity (XXE) attack against applications using the StAX parser to parse untrusted XML documents. CVE-2014-6512 It was discovered that the DatagramSocket implementation in OpenJDK failed to perform source address checks for packets received on a connected socket. A remote attacker could use this flaw to have their packets processed as if they were received from the expected source. CVE-2014-6457 It was discovered that the TLS/SSL implementation in the JSSE component in OpenJDK failed to properly verify the server identity during the renegotiation following session resumption, making it possible for malicious TLS/SSL servers to perform a Triple Handshake attack against clients using JSSE and client certificate authentication. CVE-2014-6558 It was discovered that the CipherInputStream class implementation in OpenJDK did not properly handle certain exceptions. This could possibly allow an attacker to affect the integrity of an encrypted stream handled by this class. The CVE-2014-6512 was discovered by Florian Weimer of Red Hat Product Security. Note: If the web browser plug-in provided by the icedtea-web package was installed, the issues exposed via Java applets could have been exploited without user interaction if a user visited a malicious website. Bug Fix BZ# 1148309 The TLS/SSL implementation in OpenJDK previously failed to handle Diffie-Hellman (DH) keys with more than 1024 bits. This caused client applications using JSSE to fail to establish TLS/SSL connections to servers using larger DH keys during the connection handshake. This update adds support for DH keys with size up to 2048 bits. The CVE-2014-6512 was discovered by Florian Weimer of Red Hat Product Security. Note: If the web browser plug-in provided by the icedtea-web package was installed, the issues exposed via Java applets could have been exploited without user interaction if a user visited a malicious website. All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/java-1.7.0-openjdk |
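The advisory itself does not prescribe commands, but on a typical Red Hat Enterprise Linux 6 system applying the update and confirming it might look like the sketch below. The exact package version string reported will vary by channel, and any service embedding a JVM must be restarted through whatever mechanism it uses; the tomcat6 service is shown only as an example.
# Apply the updated packages (run as root)
yum update java-1.7.0-openjdk
# Confirm the installed build and the runtime version
rpm -q java-1.7.0-openjdk
java -version
# Restart any running Java services so they pick up the fixed libraries, for example:
service tomcat6 restart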
Chapter 1. Introduction to Ceph block devices | Chapter 1. Introduction to Ceph block devices A block is a set length of bytes in a sequence, for example, a 512-byte block of data. Many blocks combined into a single file can be used as a storage device that you can read from and write to. Block-based storage interfaces are the most common way to store data with rotating media such as: Hard drives CD/DVD discs Floppy disks Traditional 9-track tapes The ubiquity of block device interfaces makes a virtual block device an ideal candidate for interacting with a mass data storage system like Red Hat Ceph Storage. Ceph block devices are thin-provisioned, resizable, and store data striped over multiple Object Storage Devices (OSD) in a Ceph storage cluster. Ceph block devices are also known as Reliable Autonomic Distributed Object Store (RADOS) Block Devices (RBDs). Ceph block devices leverage RADOS capabilities such as: Snapshots Replication Data consistency Ceph block devices interact with OSDs by using the librbd library. Ceph block devices deliver high performance with infinite scalability to Kernel Virtual Machines (KVMs), such as Quick Emulator (QEMU), and cloud-based computing systems, like OpenStack, that rely on the libvirt and QEMU utilities to integrate with Ceph block devices. You can use the same storage cluster to operate the Ceph Object Gateway and Ceph block devices simultaneously. Important Using Ceph block devices requires access to a running Ceph storage cluster. For details on installing a Red Hat Ceph Storage cluster, see the Red Hat Ceph Storage Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/introduction-to-ceph-block-devices_block |
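To make the workflow above concrete, here is a minimal sketch of creating and mapping an RBD image from a client node. The pool name rbd, image name image1, and 1 GB size are placeholder values, and the commands assume the client already has network access to the cluster and a valid keyring.
# Create a 1 GB thin-provisioned image in the 'rbd' pool (names and size are examples)
rbd create rbd/image1 --size 1024
# Map the image through the kernel RBD client; prints a device such as /dev/rbd0
rbd map rbd/image1
# List currently mapped images to confirm
rbd showmapped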
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.5 Documentation Data Grid 8.5 Component Details Supported Configurations for Data Grid 8.5 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_8.5_release_notes/rhdg-docs_datagrid |
Release notes for Red Hat Decision Manager 7.13 | Release notes for Red Hat Decision Manager 7.13 Red Hat Decision Manager 7.13 | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/index |
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] | Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description context string Arbitrary user-provided context for the event destination string A webhook URL to send events to hostName string A reference to a BareMetalHost httpHeadersRef object A secret containing HTTP headers which should be passed along to the Destination when making a request 2.1.2. .spec.httpHeadersRef Description A secret containing HTTP headers which should be passed along to the Destination when making a request Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 2.1.3. .status Description Type object Property Type Description error string subscriptionID string 2.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/bmceventsubscriptions GET : list objects of kind BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions DELETE : delete collection of BMCEventSubscription GET : list objects of kind BMCEventSubscription POST : create a BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} DELETE : delete a BMCEventSubscription GET : read the specified BMCEventSubscription PATCH : partially update the specified BMCEventSubscription PUT : replace the specified BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status GET : read status of the specified BMCEventSubscription PATCH : partially update status of the specified BMCEventSubscription PUT : replace status of the specified BMCEventSubscription 2.2.1. /apis/metal3.io/v1alpha1/bmceventsubscriptions HTTP method GET Description list objects of kind BMCEventSubscription Table 2.1. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty 2.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions HTTP method DELETE Description delete collection of BMCEventSubscription Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BMCEventSubscription Table 2.3. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a BMCEventSubscription Table 2.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 202 - Accepted BMCEventSubscription schema 401 - Unauthorized Empty 2.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the BMCEventSubscription HTTP method DELETE Description delete a BMCEventSubscription Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BMCEventSubscription Table 2.10. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BMCEventSubscription Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BMCEventSubscription Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.15. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty 2.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status Table 2.16. Global path parameters Parameter Type Description name string name of the BMCEventSubscription HTTP method GET Description read status of the specified BMCEventSubscription Table 2.17. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BMCEventSubscription Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BMCEventSubscription Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.22. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/bmceventsubscription-metal3-io-v1alpha1 |
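As a rough illustration of the spec fields documented above, the following sketch creates a BMCEventSubscription with the oc client. The field names (hostName, destination, context) come from the spec table; the namespace, host, and webhook URL are placeholders you would replace with real values.
oc apply -f - <<'EOF'
apiVersion: metal3.io/v1alpha1
kind: BMCEventSubscription
metadata:
  name: worker-0-events            # example name
  namespace: openshift-machine-api # example namespace
spec:
  hostName: worker-0                                # a BareMetalHost in the same namespace (placeholder)
  destination: https://events.example.com/webhook   # placeholder webhook receiver
  context: lab-rack-3                               # arbitrary user-provided context
EOF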
Chapter 9. Troubleshooting Ceph placement groups | Chapter 9. Troubleshooting Ceph placement groups This section contains information about fixing the most common errors related to the Ceph Placement Groups (PGs). 9.1. Prerequisites Verify your network connection. Ensure that Monitors are able to form a quorum. Ensure that all healthy OSDs are up and in , and the backfilling and recovery processes are finished. 9.2. Most common Ceph placement groups errors The following table lists the most common errors messages that are returned by the ceph health detail command. The table provides links to corresponding sections that explain the errors and point to specific procedures to fix the problems. In addition, you can list placement groups that are stuck in a state that is not optimal. See Section 9.3, "Listing placement groups stuck in stale , inactive , or unclean state" for details. 9.2.1. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. 9.2.2. Placement group error messages A table of common placement group error messages, and a potential fix. Error message See HEALTH_ERR pgs down Placement groups are down pgs inconsistent Inconsistent placement groups scrub errors Inconsistent placement groups HEALTH_WARN pgs stale Stale placement groups unfound Unfound objects 9.2.3. Stale placement groups The ceph health command lists some Placement Groups (PGs) as stale : What This Means The Monitor marks a placement group as stale when it does not receive any status update from the primary OSD of the placement group's acting set or when other OSDs reported that the primary OSD is down . Usually, PGs enter the stale state after you start the storage cluster and until the peering process completes. However, when the PGs remain stale for longer than expected, it might indicate that the primary OSD for those PGs is down or not reporting PG statistics to the Monitor. When the primary OSD storing stale PGs is back up , Ceph starts to recover the PGs. The mon_osd_report_timeout setting determines how often OSDs report PGs statistics to Monitors. Be default, this parameter is set to 0.5 , which means that OSDs report the statistics every half a second. To Troubleshoot This Problem Identify which PGs are stale and on what OSDs they are stored. The error message will include information similar to the following example: Example Troubleshoot any problems with the OSDs that are marked as down . For details, see Down OSDs . Additional Resources The Monitoring Placement Group sets section in the Administration Guide for Red Hat Ceph Storage 4 9.2.4. Inconsistent placement groups Some placement groups are marked as active + clean + inconsistent and the ceph health detail returns an error messages similar to the following one: What This Means When Ceph detects inconsistencies in one or more replicas of an object in a placement group, it marks the placement group as inconsistent . The most common inconsistencies are: Objects have an incorrect size. Objects are missing from one replica after a recovery finished. In most cases, errors during scrubbing cause inconsistency within placement groups. To Troubleshoot This Problem Determine which placement group is in the inconsistent state: Determine why the placement group is inconsistent . 
Start the deep scrubbing process on the placement group: Replace ID with the ID of the inconsistent placement group, for example: Search the output of the ceph -w for any messages related to that placement group: Replace ID with the ID of the inconsistent placement group, for example: If the output includes any error messages similar to the following ones, you can repair the inconsistent placement group. See Repairing inconsistent placement groups for details. If the output includes any error messages similar to the following ones, it is not safe to repair the inconsistent placement group because you can lose data. Open a support ticket in this situation. See Contacting Red Hat support for details. Additional Resources Listing placement group inconsistencies in the Red Hat Ceph Storage Troubleshooting Guide . The Ceph Data integrity section in the Red Hat Ceph Storage Architecture Guide . The Scrubbing the OSD section in the Red Hat Ceph Storage Configuration Guide . 9.2.5. Unclean placement groups The ceph health command returns an error message similar to the following one: What This Means Ceph marks a placement group as unclean if it has not achieved the active+clean state for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. If a placement group is unclean , it contains objects that are not replicated the number of times specified in the osd_pool_default_size parameter. The default value of osd_pool_default_size is 3 , which means that Ceph creates three replicas. Usually, unclean placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. See Down OSDs for details. Additional Resources Listing placement groups stuck in stale inactive or unclean state . 9.2.6. Inactive placement groups The ceph health command returns a error message similar to the following one: What This Means Ceph marks a placement group as inactive if it has not be active for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. Usually, inactive placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. Additional Resources Listing placement groups stuck in stale inactive or unclean state See Down OSDs for details. 9.2.7. Placement groups are down The ceph health detail command reports that some placement groups are down : What This Means In certain cases, the peering process can be blocked, which prevents a placement group from becoming active and usable. Usually, a failure of an OSD causes the peering failures. To Troubleshoot This Problem Determine what blocks the peering process: Replace ID with the ID of the placement group that is down , for example: The recovery_state section includes information why the peering process is blocked. If the output includes the peering is blocked due to down osds error message, see Down OSDs . If you see any other error message, open a support ticket. See Contacting Red Hat Support service for details. Additional Resources The Ceph OSD peering section in the Red Hat Ceph Storage Administration Guide . 9.2.8. 
Unfound objects The ceph health command returns an error message similar to the following one, containing the unfound keyword: What This Means Ceph marks objects as unfound when it knows these objects or their newer copies exist but it is unable to find them. As a consequence, Ceph cannot recover such objects and proceed with the recovery process. An Example Situation A placement group stores data on osd.1 and osd.2 . osd.1 goes down . osd.2 handles some write operations. osd.1 comes up . A peering process between osd.1 and osd.2 starts, and the objects missing on osd.1 are queued for recovery. Before Ceph copies new objects, osd.2 goes down . As a result, osd.1 knows that these objects exist, but there is no OSD that has a copy of the objects. In this scenario, Ceph is waiting for the failed node to be accessible again, and the unfound objects blocks the recovery process. To Troubleshoot This Problem Determine which placement group contain unfound objects: List more information about the placement group: Replace ID with the ID of the placement group containing the unfound objects, for example: The might_have_unfound section includes OSDs where Ceph tried to locate the unfound objects: The already probed status indicates that Ceph cannot locate the unfound objects in that OSD. The osd is down status indicates that Ceph cannot contact that OSD. Troubleshoot the OSDs that are marked as down . See Down OSDs for details. If you are unable to fix the problem that causes the OSD to be down , open a support ticket. See Contacting Red Hat Support for service for details. 9.3. Listing placement groups stuck in stale , inactive , or unclean state After a failure, placement groups enter states like degraded or peering . This states indicate normal progression through the failure recovery process. However, if a placement group stays in one of these states for a longer time than expected, it can be an indication of a larger problem. The Monitors reports when placement groups get stuck in a state that is not optimal. The mon_pg_stuck_threshold option in the Ceph configuration file determines the number of seconds after which placement groups are considered inactive , unclean , or stale . The following table lists these states together with a short explanation. State What it means Most common causes See inactive The PG has not been able to service read/write requests. Peering problems Inactive placement groups unclean The PG contains objects that are not replicated the desired number of times. Something is preventing the PG from recovering. unfound objects OSDs are down Incorrect configuration Unclean placement groups stale The status of the PG has not been updated by a ceph-osd daemon. OSDs are down Stale placement groups Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure List the stuck PGs: Additional Resources See the Placement group states section in the Red Hat Ceph Storage Administration Guide . 9.4. Listing placement group inconsistencies Use the rados utility to list inconsistencies in various replicas of an objects. Use the --format=json-pretty option to list a more detailed output. This section covers the listing of: Inconsistent placement group in a pool Inconsistent objects in a placement group Inconsistent snapshot sets in a placement group Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node. 
Procedure For example, list all inconsistent placement groups in a pool named data : For example, list inconsistent objects in a placement group with ID 0.6 : The following fields are important to determine what causes the inconsistency: name : The name of the object with inconsistent replicas. nspace : The namespace that is a logical separation of a pool. It's empty by default. locator : The key that is used as the alternative of the object name for placement. snap : The snapshot ID of the object. The only writable version of the object is called head . If an object is a clone, this field includes its sequential ID. version : The version ID of the object with inconsistent replicas. Each write operation to an object increments it. errors : A list of errors that indicate inconsistencies between shards without determining which shard or shards are incorrect. See the shard array to further investigate the errors. data_digest_mismatch : The digest of the replica read from one OSD is different from the other OSDs. size_mismatch : The size of a clone or the head object does not match the expectation. read_error : This error indicates inconsistencies caused most likely by disk errors. union_shard_error : The union of all errors specific to shards. These errors are connected to a faulty shard. The errors that end with oi indicate that you have to compare the information from a faulty object to information with selected objects. See the shard array to further investigate the errors. In the above example, the object replica stored on osd.2 has different digest than the replicas stored on osd.0 and osd.1 . Specifically, the digest of the replica is not 0xffffffff as calculated from the shard read from osd.2 , but 0xe978e67f . In addition, the size of the replica read from osd.2 is 0, while the size reported by osd.0 and osd.1 is 968. For example, list inconsistent sets of snapshots ( snapsets ) in a placement group with ID 0.23 : The command returns the following errors: ss_attr_missing : One or more attributes are missing. Attributes are information about snapshots encoded into a snapshot set as a list of key-value pairs. ss_attr_corrupted : One or more attributes fail to decode. clone_missing : A clone is missing. snapset_mismatch : The snapshot set is inconsistent by itself. head_mismatch : The snapshot set indicates that head exists or not, but the scrub results report otherwise. headless : The head of the snapshot set is missing. size_mismatch : The size of a clone or the head object does not match the expectation. Additional Resources Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Repairing inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . 9.5. Repairing inconsistent placement groups Due to an error during deep scrubbing, some placement groups can include inconsistencies. Ceph reports such placement groups as inconsistent : Warning You can repair only certain inconsistencies. Do not repair the placement groups if the Ceph logs include the following errors: Open a support ticket instead. See Contacting Red Hat Support for service for details. Prerequisites Root-level access to the Ceph Monitor node. Procedure Repair the inconsistent placement groups: Replace ID with the ID of the inconsistent placement group. Additional Resources Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Listing placement group inconsistencies Red Hat Ceph Storage Troubleshooting Guide . 9.6. 
Increasing the placement group Insufficient Placement Group (PG) count impacts the performance of the Ceph cluster and data distribution. It is one of the main causes of the nearfull osds error messages. The recommended ratio is between 100 and 300 PGs per OSD. This ratio can decrease when you add more OSDs to the cluster. The pg_num and pgp_num parameters determine the PG count. These parameters are configured per each pool, and therefore, you must adjust each pool with low PG count separately. Important Increasing the PG count is the most intensive process that you can perform on a Ceph cluster. This process might have serious performance impact if not done in a slow and methodical way. Once you increase pgp_num , you will not be able to stop or reverse the process and you must complete it. Consider increasing the PG count outside of business critical processing time allocation, and alert all clients about the potential performance impact. Do not change the PG count if the cluster is in the HEALTH_ERR state. Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node. Procedure Reduce the impact of data redistribution and recovery on individual OSDs and OSD hosts: Lower the value of the osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority parameters: Disable the shallow and deep scrubbing: Use the Ceph Placement Groups (PGs) per Pool Calculator to calculate the optimal value of the pg_num and pgp_num parameters. Increase the pg_num value in small increments until you reach the desired value. Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pg_num value: Specify the pool name and the new value, for example: Monitor the status of the cluster: The PGs state will change from creating to active+clean . Wait until all PGs are in the active+clean state. Increase the pgp_num value in small increments until you reach the desired value: Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pgp_num value: Specify the pool name and the new value, for example: Monitor the status of the cluster: The PGs state will change through peering , wait_backfill , backfilling , recover , and others. Wait until all PGs are in the active+clean state. Repeat the steps for all pools with insufficient PG count. Set osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority to their default values: Enable the shallow and deep scrubbing: Additional Resources Nearfull OSDs The Monitoring Placement Group Sets section in the Administration Guide for Red Hat Ceph Storage 4 9.7. Additional Resources See Chapter 3, Troubleshooting networking issues for details. See Chapter 4, Troubleshooting Ceph Monitors for details about troubleshooting the most common errors related to Ceph Monitors. See Chapter 5, Troubleshooting Ceph OSDs for details about troubleshooting the most common errors related to Ceph OSDs. | [
"HEALTH_WARN 24 pgs stale; 3/300 in osds are down",
"ceph health detail HEALTH_WARN 24 pgs stale; 3/300 in osds are down pg 2.5 is stuck stale+active+remapped, last acting [2,0] osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080 osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539 osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph pg deep-scrub ID",
"ceph pg deep-scrub 0.6 instructing pg 0.6 on osd.0 to deep-scrub",
"ceph -w | grep ID",
"ceph -w | grep 0.6 2015-02-26 01:35:36.778215 osd.106 [ERR] 0.6 deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes. 2015-02-26 01:35:36.788334 osd.106 [ERR] 0.6 deep-scrub 1 errors",
"PG . ID shard OSD : soid OBJECT missing attr , missing attr _ATTRIBUTE_TYPE PG . ID shard OSD : soid OBJECT digest 0 != known digest DIGEST , size 0 != known size SIZE PG . ID shard OSD : soid OBJECT size 0 != known size SIZE PG . ID deep-scrub stat mismatch, got MISMATCH PG . ID shard OSD : soid OBJECT candidate had a read error, digest 0 != known digest DIGEST",
"PG . ID shard OSD : soid OBJECT digest DIGEST != known digest DIGEST PG . ID shard OSD : soid OBJECT omap_digest DIGEST != known omap_digest DIGEST",
"HEALTH_WARN 197 pgs stuck unclean",
"ceph osd tree",
"HEALTH_WARN 197 pgs stuck inactive",
"ceph osd tree",
"HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down pg 0.5 is down+peering pg 1.4 is down+peering osd.1 is down since epoch 69, last address 192.168.106.220:6801/8651",
"ceph pg ID query",
"ceph pg 0.5 query { \"state\": \"down+peering\", \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Peering\\/GetInfo\", \"enter_time\": \"2012-03-06 14:40:16.169679\", \"requested_info_from\": []}, { \"name\": \"Started\\/Primary\\/Peering\", \"enter_time\": \"2012-03-06 14:40:16.169659\", \"probing_osds\": [ 0, 1], \"blocked\": \"peering is blocked due to down osds\", \"down_osds_we_would_probe\": [ 1], \"peering_blocked_by\": [ { \"osd\": 1, \"current_lost_at\": 0, \"comment\": \"starting or marking this osd lost may let us proceed\"}]}, { \"name\": \"Started\", \"enter_time\": \"2012-03-06 14:40:16.169513\"} ] }",
"HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%)",
"ceph health detail HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 5/937611 objects degraded (0.001%); 1/312537 unfound (0.000%) pg 3.8a5 is stuck unclean for 803946.712780, current state active+recovering, last acting [320,248,0] pg 3.8a5 is active+recovering, acting [320,248,0], 1 unfound recovery 5/937611 objects degraded (0.001%); **1/312537 unfound (0.000%)**",
"ceph pg ID query",
"ceph pg 3.8a5 query { \"state\": \"active+recovering\", \"epoch\": 10741, \"up\": [ 320, 248, 0], \"acting\": [ 320, 248, 0], <snip> \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2015-01-28 19:30:12.058136\", \"might_have_unfound\": [ { \"osd\": \"0\", \"status\": \"already probed\"}, { \"osd\": \"248\", \"status\": \"already probed\"}, { \"osd\": \"301\", \"status\": \"already probed\"}, { \"osd\": \"362\", \"status\": \"already probed\"}, { \"osd\": \"395\", \"status\": \"already probed\"}, { \"osd\": \"429\", \"status\": \"osd is down\"}], \"recovery_progress\": { \"backfill_targets\": [], \"waiting_on_backfill\": [], \"last_backfill_started\": \"0\\/\\/0\\/\\/-1\", \"backfill_info\": { \"begin\": \"0\\/\\/0\\/\\/-1\", \"end\": \"0\\/\\/0\\/\\/-1\", \"objects\": []}, \"peer_backfill_info\": [], \"backfills_in_flight\": [], \"recovering\": [], \"pg_backend\": { \"pull_from_peer\": [], \"pushing\": []}}, \"scrub\": { \"scrubber.epoch_start\": \"0\", \"scrubber.active\": 0, \"scrubber.block_writes\": 0, \"scrubber.finalizing\": 0, \"scrubber.waiting_on\": 0, \"scrubber.waiting_on_whom\": []}}, { \"name\": \"Started\", \"enter_time\": \"2015-01-28 19:30:11.044020\"}],",
"ceph pg dump_stuck inactive ceph pg dump_stuck unclean ceph pg dump_stuck stale",
"rados list-inconsistent-pg POOL --format=json-pretty",
"rados list-inconsistent-pg data --format=json-pretty [0.6]",
"rados list-inconsistent-obj PLACEMENT_GROUP_ID",
"rados list-inconsistent-obj 0.6 { \"epoch\": 14, \"inconsistents\": [ { \"object\": { \"name\": \"image1\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"version\": 1 }, \"errors\": [ \"data_digest_mismatch\", \"size_mismatch\" ], \"union_shard_errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"selected_object_info\": \"0:602f83fe:::foo:head(16'1 client.4110.0:1 dirty|data_digest|omap_digest s 968 uv 1 dd e978e67f od ffffffff alloc_hint [0 0 0])\", \"shards\": [ { \"osd\": 0, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 1, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 2, \"errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"size\": 0, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xffffffff\" } ] } ] }",
"rados list-inconsistent-snapset PLACEMENT_GROUP_ID",
"rados list-inconsistent-snapset 0.23 --format=json-pretty { \"epoch\": 64, \"inconsistents\": [ { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000001\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000002\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"ss_attr_missing\": true, \"extra_clones\": true, \"extra clones\": [ 2, 1 ] } ]",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"_PG_._ID_ shard _OSD_: soid _OBJECT_ digest _DIGEST_ != known digest _DIGEST_ _PG_._ID_ shard _OSD_: soid _OBJECT_ omap_digest _DIGEST_ != known omap_digest _DIGEST_",
"ceph pg repair ID",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd pool set POOL pg_num VALUE",
"ceph osd pool set data pg_num 4",
"ceph -s",
"ceph osd pool set POOL pgp_num VALUE",
"ceph osd pool set data pgp_num 4",
"ceph -s",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3'",
"ceph osd unset noscrub ceph osd unset nodeep-scrub"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/troubleshooting_guide/troubleshooting-ceph-placement-groups |
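The incremental pg_num procedure in section 9.6 above can be wrapped in a small loop. The sketch below is only illustrative: the pool name, target count, and step size are placeholder values, and the guide's requirement to wait for all PGs to reach active+clean between increments is represented here only by a ceph -s check and a sleep, so adjust or pause manually as needed. Repeat the same loop for pgp_num after pg_num reaches the target, as the procedure describes.
# Illustrative helper: raise pg_num for pool 'data' in steps of 4 (placeholder values)
POOL=data
TARGET=128
STEP=4
CURRENT=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')
while [ "$CURRENT" -lt "$TARGET" ]; do
  CURRENT=$((CURRENT + STEP))
  ceph osd pool set "$POOL" pg_num "$CURRENT"
  # Check cluster state; do not continue until all PGs are active+clean
  ceph -s
  sleep 60
done
# Then repeat the same loop with 'pgp_num' in place of 'pg_num'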
function::user_string2 | function::user_string2 Name function::user_string2 - Retrieves string from user space with alternative error string Synopsis Arguments addr the user space address to retrieve the string from err_msg the error message to return when data isn't available Description Returns the null-terminated C string from a given user space memory address. Reports the given error message in the rare cases when userspace data is not accessible. | [
"user_string2:string(addr:long,err_msg:string)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string2 |
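A small usage sketch for the function above, run as a stap one-liner. The probe point and the use of pointer_arg(1) to obtain the user-space address of the open(2) pathname are assumptions that may need adjusting for your kernel and tapset version; the fallback string is printed whenever the user-space data cannot be read.
# Print the pathname passed to open(2), or a fallback message if it is unreadable
stap -e 'probe nd_syscall.open {
  println(user_string2(pointer_arg(1), "<address not accessible>"))
}'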
11.12. Setting up Shared Storage Volume | 11.12. Setting up Shared Storage Volume Features like Snapshot Scheduler, NFS Ganesha and geo-replication require shared storage to be available across all nodes of the cluster. A gluster volume named gluster_shared_storage is made available for this purpose, and is facilitated by the following volume set option. This option accepts the following two values: enable When the volume set option is enabled, a gluster volume named gluster_shared_storage is created in the cluster, and is mounted at /var/run/gluster/shared_storage on all the nodes in the cluster. Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Note This option cannot be enabled if there is only one node present in the cluster, or if only one node is online in the cluster. The volume created is a replica 3 volume. This depends on the number of nodes which are online in the cluster at the time of enabling this option and each of these nodes will have one brick participating in the volume. The brick path participating in the volume is /var/lib/glusterd/ss_brick. The mount entry is also added to /etc/fstab as part of enable . Before enabling this feature, make sure that there is no volume named gluster_shared_storage in the cluster. This volume name is reserved for internal use only. After successfully setting up the shared storage volume, when a new node is added to the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: disable When the volume set option is disabled, the gluster_shared_storage volume is unmounted on all the nodes in the cluster, and then the volume is deleted. The mount entry from /etc/fstab as part of disable is also removed. For example: Important After creating a cluster execute the following command on all nodes present in the cluster: This is applicable for Red Hat Enterprise Linux 7 (RHEL 7) and Red Hat Enterprise Linux 8 (RHEL 8). | [
"cluster.enable-shared-storage",
"mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage cp /etc/fstab /var/run/gluster/fstab.tmp echo \"<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab",
"gluster volume set all cluster.enable-shared-storage enable volume set: success",
"systemctl enable glusterfssharedstorage.service"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_red_hat_storage_volumes-shared_volume |
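After enabling the option described above, a short verification such as the following confirms that the internal volume exists and is mounted. The mount path shown assumes a 3.5 Batch Update 3 or later installation, where the mount point moved to /run/gluster/; on older installations check /var/run/gluster/shared_storage instead.
# Confirm the internal shared-storage volume was created
gluster volume info gluster_shared_storage
# Confirm it is mounted on this node and listed in fstab
df -h /run/gluster/shared_storage
grep gluster_shared_storage /etc/fstab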
Appendix C. Restoring manual changes overwritten by a Puppet run | Appendix C. Restoring manual changes overwritten by a Puppet run If your manual configuration has been overwritten by a Puppet run, you can restore the files to the previous state. The following example shows you how to restore a DHCP configuration file overwritten by a Puppet run. Procedure Copy the file you intend to restore. This allows you to compare the files to check for any mandatory changes required by the upgrade. This is not common for DNS or DHCP services. Check the log files to note down the md5sum of the overwritten file. For example: Restore the overwritten file: Compare the backup file and the restored file, and edit the restored file to include any mandatory changes required by the upgrade. | [
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/restoring-manual-changes-overwritten-by-a-puppet-run_satellite |
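For the final comparison step in the procedure above, a quick sketch; the backup path is the one created in the first step, and diff is simply one convenient way to review the differences before re-applying any mandatory changes.
# Compare the restored file against the manually created backup
diff -u /etc/dhcp/dhcpd.backup /etc/dhcp/dhcpd.conf
# Optionally confirm the checksum matches the value noted from the logs
md5sum /etc/dhcp/dhcpd.conf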
Chapter 1. Overview | Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 3, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 6, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/troubleshooting_openshift_data_foundation/overview |
21.3.3. Adding a Local Printer | 21.3.3. Adding a Local Printer Follow this procedure to add a local printer connected to a port other than a serial port: Open the New Printer dialog (see Section 21.3.2, "Starting Printer Setup" ). If the device does not appear automatically, select the port to which the printer is connected in the list on the left (such as Serial Port #1 or LPT #1 ). On the right, enter the connection properties: for Other , the URI (for example, file:/dev/lp0 ); for Serial Port , the Baud Rate , Parity , Data Bits , and Flow Control settings. Figure 21.4. Adding a local printer Click Forward . Select the printer model. See Section 21.3.8, "Selecting the Printer Model and Finishing" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-adding_other_printer |
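For administrators who prefer the command line to the system-config-printer dialog described above, a roughly equivalent CUPS sketch follows. The queue name is a placeholder, the device URI mirrors the file:/dev/lp0 example from the procedure, and note that CUPS may need FileDevice enabled before it accepts file: URIs.
# Create and enable a local print queue pointing at the parallel-port device (names are examples)
lpadmin -p officeprinter -E -v file:/dev/lp0
# Confirm the queue exists and is accepting jobs
lpstat -p officeprinter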
Chapter 145. Velocity | Chapter 145. Velocity Since Camel 1.2 Only producer is supported The Velocity component allows you to process a message using an Apache Velocity template. This can be ideal when using a template to generate responses for requests. 145.1. Dependencies When using velocity with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-velocity-starter</artifactId> </dependency> 145.2. URI format Where templateName is the classpath-local URI of the template to invoke; or the complete URL of the remote template (for example, file://folder/myfile.vm ). 145.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 145.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 145.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 145.4. Component Options The Velocity component supports 5 options, which are listed below. Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean velocityEngine (advanced) To use the VelocityEngine otherwise a new engine is created. VelocityEngine 145.5. Endpoint Options The Velocity endpoint is configured using URI syntax: with the following path and query parameters: 145.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 145.5.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean encoding (producer) Character encoding of the resource content. String loaderCache (producer) Enables / disables the velocity resource loader cache which is enabled by default. true boolean propertiesFile (producer) The URI of the properties file which is used for VelocityEngine initialization. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 145.6. Message Headers The Velocity component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelVelocityResourceUri (producer) Constant: VELOCITY_RESOURCE_URI The name of the velocity template. String CamelVelocityTemplate (producer) Constant: VELOCITY_TEMPLATE The content of the velocity template. 
String CamelVelocityContext (producer) Constant: VELOCITY_CONTEXT The velocity context to use. Context CamelVelocitySupplementalContext (producer) Constant: VELOCITY_SUPPLEMENTAL_CONTEXT To add additional information to the used VelocityContext. The value of this header should be a Map with key/values that will added (override any existing key with the same name). This can be used to pre setup some common key/values you want to reuse in your velocity endpoints. Map Headers set during the Velocity evaluation are returned to the message and added as headers. Then it is possible to return values from Velocity to the Message. For example, to set the header value of fruit in the Velocity template .tm : The fruit header is now accessible from the message.out.headers . 145.7. Velocity Context Camel will provide exchange information in the Velocity context (just a Map ). The Exchange is transfered as: key value exchange The Exchange itself. exchange.properties The Exchange properties. headers The headers of the In message. camelContext The Camel Context instance. request The In message. in The In message. body The In message body. out The Out message (only for InOut message exchange pattern). response The Out message (only for InOut message exchange pattern). You can setup a custom Velocity Context yourself by setting property allowTemplateFromHeader=true and setting the message header CamelVelocityContext just like this VelocityContext velocityContext = new VelocityContext(variableMap); exchange.getIn().setHeader("CamelVelocityContext", velocityContext); 145.8. Hot reloading The Velocity template resource is, by default, hot reloadable for both file and classpath resources (expanded jar). If you set contentCache=true , Camel will only load the resource once, and thus hot reloading is not possible. This scenario can be used in production, when the resource never changes. 145.9. Dynamic templates Since Camel 2.1 Camel provides two headers by which you can define a different resource location for a template or the template content itself. If any of these headers is set then Camel uses this over the endpoint configured resource. This allows you to provide a dynamic template at runtime. Header Type Description CamelVelocityResourceUri String A URI for the template resource to use instead of the endpoint configured. CamelVelocityTemplate String The template to use instead of the endpoint configured. 145.10. Samples For example, you can use: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm"); To use a Velocity template to formulate a response to a message for InOut message exchanges (where there is a JMSReplyTo header). If you want to use InOnly and consume the message and send it to another destination, you could use the following route: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm"). to("activemq:Another.Queue"); And to use the content cache, for example, for use in production, where the .vm template never changes: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm?contentCache=true"). to("activemq:Another.Queue"); And a file based resource: from("activemq:My.Queue"). to("velocity:file://myfolder/MyResponse.vm?contentCache=true"). to("activemq:Another.Queue"); It is possible to specify which template the component should use dynamically via a header, for example: from("direct:in"). setHeader("CamelVelocityResourceUri").constant("path/to/my/template.vm"). 
to("velocity:dummy?allowTemplateFromHeader=true""); It is possible to specify a template directly as a header the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelVelocityTemplate").constant("Hi this is a velocity template that can do templating USD{body}"). to("velocity:dummy?allowTemplateFromHeader=true""); 145.11. The Email Sample In this sample, to use the Velocity templating for an order confirmation email. The email template is laid out in Velocity as: letter.vm And the java code (from an unit test): private Exchange createLetter() { Exchange exchange = context.getEndpoint("direct:a").createExchange(); Message msg = exchange.getIn(); msg.setHeader("firstName", "Claus"); msg.setHeader("lastName", "Ibsen"); msg.setHeader("item", "Camel in Action"); msg.setBody("PS: beer is on me, James"); return exchange; } @Test public void testVelocityLetter() throws Exception { MockEndpoint mock = getMockEndpoint("mock:result"); mock.expectedMessageCount(1); mock.message(0).body(String.class).contains("Thanks for the order of Camel in Action"); template.send("direct:a", createLetter()); mock.assertIsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from("direct:a") .to("velocity:org/apache/camel/component/velocity/letter.vm") .to("mock:result"); } }; } 145.12. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.velocity.allow-context-map-all Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false Boolean camel.component.velocity.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.velocity.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.velocity.enabled Whether to enable auto configuration of the velocity component. This is enabled by default. Boolean camel.component.velocity.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.velocity.velocity-engine To use the VelocityEngine otherwise a new engine is created. 
The option is a org.apache.velocity.app.VelocityEngine type. VelocityEngine | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-velocity-starter</artifactId> </dependency>",
"velocity:templateName[?options]",
"velocity:resourceUri",
"USDin.setHeader(\"fruit\", \"Apple\")",
"VelocityContext velocityContext = new VelocityContext(variableMap); exchange.getIn().setHeader(\"CamelVelocityContext\", velocityContext);",
"from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\");",
"from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\"). to(\"activemq:Another.Queue\");",
"from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm?contentCache=true\"). to(\"activemq:Another.Queue\");",
"from(\"activemq:My.Queue\"). to(\"velocity:file://myfolder/MyResponse.vm?contentCache=true\"). to(\"activemq:Another.Queue\");",
"from(\"direct:in\"). setHeader(\"CamelVelocityResourceUri\").constant(\"path/to/my/template.vm\"). to(\"velocity:dummy?allowTemplateFromHeader=true\"\");",
"from(\"direct:in\"). setHeader(\"CamelVelocityTemplate\").constant(\"Hi this is a velocity template that can do templating USD{body}\"). to(\"velocity:dummy?allowTemplateFromHeader=true\"\");",
"Dear USD{headers.lastName}, USD{headers.firstName} Thanks for the order of USD{headers.item}. Regards Camel Riders Bookstore USD{body}",
"private Exchange createLetter() { Exchange exchange = context.getEndpoint(\"direct:a\").createExchange(); Message msg = exchange.getIn(); msg.setHeader(\"firstName\", \"Claus\"); msg.setHeader(\"lastName\", \"Ibsen\"); msg.setHeader(\"item\", \"Camel in Action\"); msg.setBody(\"PS: Next beer is on me, James\"); return exchange; } @Test public void testVelocityLetter() throws Exception { MockEndpoint mock = getMockEndpoint(\"mock:result\"); mock.expectedMessageCount(1); mock.message(0).body(String.class).contains(\"Thanks for the order of Camel in Action\"); template.send(\"direct:a\", createLetter()); mock.assertIsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(\"direct:a\") .to(\"velocity:org/apache/camel/component/velocity/letter.vm\") .to(\"mock:result\"); } }; }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-velocity-component-starter |
Chapter 8. neutron | Chapter 8. neutron The following chapter contains information about the configuration options in the neutron service. 8.1. dhcp_agent.ini This section contains options for the /etc/neutron/dhcp_agent.ini file. 8.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/dhcp_agent.ini file. . Configuration option = Default value Type Description bulk_reload_interval = 0 integer value Time to sleep between reloading the DHCP allocations. This will only be invoked if the value is not 0. If a network has N updates in X seconds then we will reload once with the port changes in the X seconds and not N times. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_broadcast_reply = False boolean value Use broadcast in DHCP replies. dhcp_confs = USDstate_path/dhcp string value Location to store DHCP server config files. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq string value The driver used to manage the DHCP server. dhcp_rebinding_time = 0 integer value DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of the lease time. dhcp_renewal_time = 0 integer value DHCP renewal time T1 (in seconds). If set to 0, it will default to half of the lease time. dnsmasq_base_log_dir = None string value Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or DNS. If this section is null, disable dnsmasq log. `dnsmasq_config_file = ` string value Override the default dnsmasq settings with this file. dnsmasq_dns_servers = [] list value Comma-separated list of the DNS servers which will be used as forwarders. dnsmasq_enable_addr6_list = False boolean value Enable dhcp-host entry with list of addresses when port has multiple IPv6 addresses in the same subnet. dnsmasq_lease_max = 16777216 integer value Limit number of leases to prevent a denial-of-service. dnsmasq_local_resolv = False boolean value Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively removes the --no-resolv option from the dnsmasq process arguments. Adding custom DNS resolvers to the dnsmasq_dns_servers option disables this feature. enable_isolated_metadata = False boolean value The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn't have any effect when force_metadata is set to True. 
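As an illustration only (not a complete configuration; the DNS server addresses are placeholders), a few of the options described above might be combined in /etc/neutron/dhcp_agent.ini as follows:
[DEFAULT]
# Reference dnsmasq-based driver (the documented default)
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Placeholder upstream resolvers to which instance DNS queries are forwarded
dnsmasq_dns_servers = 203.0.113.53,198.51.100.53
# Serve metadata on subnets that have no router port
enable_isolated_metadata = True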
enable_metadata_network = False boolean value Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in VMs, as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. force_metadata = False boolean value In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service will be activated for all the networks. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". num_sync_threads = 4 integer value Number of threads to use during sync process. Should not exceed connection pool size configured on server. ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. resync_interval = 5 integer value The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is maximum number of seconds between attempts. The resync can be done more often based on the events triggered. resync_throttle = 1 integer value Throttle the number of resync state events between the local DHCP state and Neutron to only once per resync_throttle seconds. The value of throttle introduces a minimum interval between resync state events. Otherwise the resync may end up in a busy-loop. The value must be less than resync_interval. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.1.2. 
agent The following table outlines the options available under the [agent] group in the /etc/neutron/dhcp_agent.ini file. Table 8.1. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.1.3. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/dhcp_agent.ini file. Table 8.2. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.2. l3_agent.ini This section contains options for the /etc/neutron/l3_agent.ini file. 8.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/l3_agent.ini file. . Configuration option = Default value Type Description agent_mode = legacy string value The working mode for the agent. Allowed modes are: legacy - this preserves the existing behavior where the L3 agent is deployed on a centralized networking node to provide L3 services like DNAT, and SNAT. Use this mode if you do not want to adopt DVR. dvr - this mode enables DVR functionality and must be used for an L3 agent that runs on a compute host. dvr_snat - this enables centralized SNAT support in conjunction with DVR. This mode must be used for an L3 agent running on a centralized node (or in single-host deployments, e.g. devstack). dvr_no_external - this mode enables only East/West DVR routing functionality for a L3 agent that runs on a compute host, the North/South functionality such as DNAT and SNAT will be provided by the centralized network node that is running in dvr_snat mode. This mode should be used when there is no external network connectivity on the compute host. 
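For example (a minimal sketch, not a complete L3 agent configuration), a DVR deployment might set the mode differently per node type in /etc/neutron/l3_agent.ini:
# On a compute host: East/West DVR routing only
[DEFAULT]
agent_mode = dvr

# On a centralized network node: DVR plus centralized SNAT/DNAT
[DEFAULT]
agent_mode = dvr_snat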
api_workers = None integer value Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. cleanup_on_shutdown = False boolean value Delete all routers on L3 agent shutdown. For L3 HA routers it includes a shutdown of keepalived and the state change monitor. NOTE: Setting to True could affect the data plane when stopping or restarting the L3 agent. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. enable_metadata_proxy = True boolean value Allow running metadata proxy. external_ingress_mark = 0x2 string value Iptables mangle mark used to mark ingress from external network. This mark will be masked with 0xffff so that only the lower 16 bits will be used. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. ha_confs_path = USDstate_path/ha_confs string value Location to store keepalived config files ha_keepalived_state_change_server_threads = <based on operating system> integer value Number of concurrent threads for keepalived server connection requests. More threads create a higher CPU load on the agent node. ha_vrrp_advert_int = 2 integer value The advertisement interval in seconds ha_vrrp_auth_password = None string value VRRP authentication password ha_vrrp_auth_type = PASS string value VRRP authentication type ha_vrrp_garp_master_delay = 5 integer value The delay for second set of gratuitous ARPs after lower priority advert received when MASTER. NOTE: this config option will be available only in OSP13 and OSP16. Future releases will implement a template form to provide the "keepalived" configuration. ha_vrrp_garp_master_repeat = 5 integer value The number of gratuitous ARP messages to send at a time after transition to MASTER. NOTE: this config option will be available only in OSP13 and OSP16. Future releases will implement a template form to provide the "keepalived" configuration. ha_vrrp_health_check_interval = 0 integer value The VRRP health check interval in seconds. Values > 0 enable VRRP health checks. Setting it to 0 disables VRRP health checks. Recommended value is 5. This will cause pings to be sent to the gateway IP address(es) - requires ICMP_ECHO_REQUEST to be enabled on the gateway(s). If a gateway fails, all routers will be reported as primary, and a primary election will be repeated in a round-robin fashion, until one of the routers restores the gateway connection. handle_internal_only_routers = True boolean value Indicates that this L3 agent should also handle routers that do not have an external network gateway configured. This option should be True only for a single agent in a Neutron deployment, and may be False for all agents if all routers must have an external network gateway. 
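As a hedged example (the interval and password values are placeholders, not tuning recommendations), the HA-related options above could be combined as:
[DEFAULT]
# Send VRRP advertisements every 2 seconds
ha_vrrp_advert_int = 2
ha_vrrp_auth_type = PASS
# Placeholder secret; substitute your own value
ha_vrrp_auth_password = example-vrrp-secret
# Ping the gateway(s) every 5 seconds; 0 disables health checks
ha_vrrp_health_check_interval = 5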
`instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. `ipv6_gateway = ` string value With IPv6, the network used for the external gateway does not need to have an associated subnet, since the automatically assigned link-local address (LLA) can be used. However, an IPv6 gateway address is needed for use as the next hop for the default route. If no IPv6 gateway address is configured here (and only then), the neutron router will be configured to get its default route from router advertisements (RAs) from the upstream router; in which case the upstream router must also be configured to send these RAs. The ipv6_gateway, when configured, should be the LLA of the interface on the upstream router. If a next hop using a global unique address (GUA) is desired, it needs to be done via a subnet allocated to the network and not through this parameter. keepalived_use_no_track = True boolean value If keepalived is used without support for the "no_track" option, this should be set to False. Support for this option was introduced in keepalived 2.x. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined.
Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_rtr_adv_interval = 100 integer value MaxRtrAdvInterval setting for radvd.conf metadata_access_mark = 0x1 string value Iptables mangle mark used to mark metadata valid requests. This mark will be masked with 0xffff so that only the lower 16 bits will be used. metadata_port = 9697 port value TCP Port used by Neutron metadata namespace proxy. min_rtr_adv_interval = 30 integer value MinRtrAdvInterval setting for radvd.conf ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. pd_confs = USDstate_path/pd string value Location to store IPv6 PD files. periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. prefix_delegation_driver = dibbler string value Driver used for ipv6 prefix delegation. This needs to be an entry point defined in the neutron.agent.linux.pd_drivers namespace. See setup.cfg for entry points included with the neutron source. publish_errors = False boolean value Enables or disables publication of error events. ra_confs = USDstate_path/ra string value Location to store IPv6 RA config files `radvd_user = ` string value The username passed to radvd, used to drop root privileges and change user ID to username and group ID to the primary group of username. If no user specified (by default), the user executing the L3 agent will be passed. If "root" specified, because radvd is spawned as root, no "username" parameter will be passed. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. 
If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vendor_pen = 8888 string value A decimal value as Vendor's Registered Private Enterprise Number as required by RFC3315 DUID-EN. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.2.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/l3_agent.ini file. Table 8.3. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node extensions = [] list value Extensions list to use log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.2.3. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/l3_agent.ini file. Table 8.4. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.2.4. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/l3_agent.ini file. Table 8.5. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. 
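For instance (illustrative values only, matching the documented defaults), the OVSDB options above correspond to a section such as:
[ovs]
# Local OVSDB server reachable over TCP; allow 10 seconds per command
ovsdb_connection = tcp:127.0.0.1:6640
ovsdb_timeout = 10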
ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.3. linuxbridge_agent.ini This section contains options for the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. 8.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. 
Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.3.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.6. agent Configuration option = Default value Type Description dscp = None integer value The DSCP value to use for outer headers during tunnel encapsulation. dscp_inherit = False boolean value If set to True, the DSCP value of tunnel interfaces is overwritten and set to inherit. 
The DSCP value of the inner header is then copied to the outer header. extensions = [] list value Extensions list to use polling_interval = 2 integer value The number of seconds the agent will wait between polling for local device changes. quitting_rpc_timeout = 10 integer value Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won't be changed 8.3.3. linux_bridge The following table outlines the options available under the [linux_bridge] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.7. linux_bridge Configuration option = Default value Type Description bridge_mappings = [] list value List of <physical_network>:<physical_bridge> physical_interface_mappings = [] list value Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent's node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. 8.3.4. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.8. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.3.5. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.9. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.3.6. vxlan The following table outlines the options available under the [vxlan] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.10. vxlan Configuration option = Default value Type Description arp_responder = False boolean value Enable local ARP responder which provides local responses instead of performing ARP broadcast into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-pairs extension. enable_vxlan = True boolean value Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver l2_population = False boolean value Extension to use alongside ml2 plugin's l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). 
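As a sketch only (the interface name and tunnel IP address are placeholders), the [linux_bridge] and [vxlan] options described above are commonly combined in /etc/neutron/plugins/ml2/linuxbridge_agent.ini as:
[linux_bridge]
# Map the "provider" physical network to this host's eth1 interface (placeholder)
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = True
# Placeholder local endpoint address for the VXLAN overlay
local_ip = 192.0.2.10
l2_population = True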
multicast_ranges = [] list value Optional comma-separated list of <multicast address>:<vni_min>:<vni_max> triples describing how to assign a multicast address to VXLAN according to its VNI ID. tos = None integer value TOS for vxlan interface protocol packets. This option is deprecated in favor of the dscp option in the AGENT section and will be removed in a future release. To convert the TOS value to DSCP, divide by 4. ttl = None integer value TTL for vxlan interface protocol packets. udp_dstport = None port value The UDP port used for VXLAN communication. By default, the Linux kernel doesn't use the IANA assigned standard value, so if you want to use it, this option must be set to 4789. It is not set by default because of backward compatibility. udp_srcport_max = 0 port value The maximum of the UDP source port range used for VXLAN communication. udp_srcport_min = 0 port value The minimum of the UDP source port range used for VXLAN communication. vxlan_group = 224.0.0.1 string value Multicast group(s) for vxlan interface. A range of group addresses may be specified by using CIDR notation. Specifying a range allows different VNIs to use different group addresses, reducing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all the agents. 8.4. metadata_agent.ini This section contains options for the /etc/neutron/metadata_agent.ini file. 8.4.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metadata_agent.ini file. . Configuration option = Default value Type Description auth_ca_cert = None string value Certificate Authority public key (CA cert) file for ssl debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_backlog = 4096 integer value Number of backlog requests to configure the metadata server socket with `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). `metadata_proxy_shared_secret = ` string value When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in [neutron] section. metadata_proxy_socket = USDstate_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. metadata_proxy_socket_mode = deduce string value Metadata Proxy UNIX domain socket mode, 4 values allowed: deduce : deduce mode from metadata_proxy_user/group values, user : set metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, group : set metadata proxy socket mode to 0o664, to use when metadata_proxy_group is agent effective group or root, all : set metadata proxy socket mode to 0o666, to use otherwise. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). 
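For example (the secret below is a placeholder and must match the value configured on the Nova metadata service), the proxy options above are typically set as:
[DEFAULT]
# Placeholder shared secret; keep it in sync with the Nova side
metadata_proxy_shared_secret = example-shared-secret
metadata_proxy_socket_mode = deduce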
metadata_workers = <based on operating system> integer value Number of separate worker processes for metadata server (defaults to 2 when used with ML2/OVN and half of the number of CPUs with other backend drivers) `nova_client_cert = ` string value Client certificate for nova metadata api server. `nova_client_priv_key = ` string value Private key of client certificate. nova_metadata_host = 127.0.0.1 host address value IP address or DNS name of Nova metadata server. nova_metadata_insecure = False boolean value Allow to perform insecure SSL (https) requests to nova metadata nova_metadata_port = 8775 port value TCP Port used by Nova metadata server. nova_metadata_protocol = http string value Protocol to access nova metadata, http or https publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.4.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metadata_agent.ini file. Table 8.11. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.4.3. cache The following table outlines the options available under the [cache] group in the /etc/neutron/metadata_agent.ini file. Table 8.12. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. 
For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. dead_timeout = 60 floating point value Time in seconds before attempting to add a node back in the pool in the HashClient's internal mechanisms. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts. enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. hashclient_retry_attempts = 2 integer value Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms. hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcached and oslo_cache.memcache_pool backends only). If a given host refers to an IPv6 address or a given domain refers to IPv6, then you should prefix the given address with the address family ( inet6 ) (e.g. inet6:[::1]:11211 , inet6:[fd12:3456:789a:1::1]:11211 , inet6:[controller-0.internalapi]:11211 ). If the address family is not given, then the default address family used will be inet , which corresponds to IPv4. memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
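As an illustrative sketch only (the memcached endpoint is a placeholder), a memcached-backed [cache] section built from the options above might look like:
[cache]
enabled = True
backend = dogpile.cache.memcached
# Placeholder memcached server address
memcache_servers = 192.0.2.20:11211
# Cached items expire after 10 minutes
expiration_time = 600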
proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. retry_attempts = 2 integer value Number of times to attempt an action before failing. retry_delay = 0 floating point value Number of seconds to sleep between each attempt. socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero. socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 8.5. metering_agent.ini This section contains options for the /etc/neutron/metering_agent.ini file. 8.5.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metering_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver string value Metering driver fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. granular_traffic_data = False boolean value Defines if the metering agent driver should present traffic data in a granular fashion, instead of grouping all of the traffic data for all projects and routers to which the labels were assigned. The default value is False for backward compatibility.
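For example (a sketch only; confirm the driver class shipped with your Neutron release before relying on it), a metering agent that reports per-label data with the iptables-based driver could set:
[DEFAULT]
# Assumed class path of the iptables metering driver; verify it for your release
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
granular_traffic_data = True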
`instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". measure_interval = 30 integer value Interval between two metering measures ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. 
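For example, the log rotation options above could be combined as in this sketch to rotate logs daily and keep 30 rotated files; note that log_rotation_type must be set to "interval" for the interval options to take effect:
[DEFAULT]
log_rotation_type = interval
log_rotate_interval = 1
log_rotate_interval_type = days
max_logfile_count = 30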
rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. report_interval = 300 integer value Interval between two metering reports rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.5.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metering_agent.ini file. Table 8.13. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.5.3. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/metering_agent.ini file. Table 8.14. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. 
Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.6. ml2_conf.ini This section contains options for the /etc/neutron/plugins/ml2/ml2_conf.ini file. 8.6.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. 
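Referring back to the [ovs] group of metering_agent.ini described above, an SSL-secured OVSDB connection might be configured as in the following sketch; the certificate and key paths are placeholders:
[ovs]
ovsdb_connection = ssl:127.0.0.1:6640
ssl_ca_cert_file = /etc/pki/tls/certs/ovsdb-ca.crt
ssl_cert_file = /etc/pki/tls/certs/ovsdb-client.crt
ssl_key_file = /etc/pki/tls/private/ovsdb-client.key
ovsdb_timeout = 10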
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.6.2. ml2 The following table outlines the options available under the [ml2] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.15. ml2 Configuration option = Default value Type Description extension_drivers = [] list value An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. 
For example: extension_drivers = port_security,qos external_network_type = None string value Default network type for external networks when no provider attributes are specified. By default it is None, which means that if provider attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for external_network_type config option depend on the network type values configured in type_drivers config option. mechanism_drivers = [] list value An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace. overlay_ip_version = 4 integer value IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6. path_mtu = 0 integer value Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation when using an overlay/tunnel protocol. This option allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. physical_network_mtus = [] list value A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. tenant_network_types = ['local'] list value Ordered list of network_types to allocate as tenant networks. The default value local is useful for single-box testing but provides no connectivity between hosts. type_drivers = ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve'] list value List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace. 8.6.3. ml2_type_flat The following table outlines the options available under the [ml2_type_flat] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.16. ml2_type_flat Configuration option = Default value Type Description flat_networks = * list value List of physical_network names with which flat networks can be created. Use default * to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks. 8.6.4. ml2_type_geneve The following table outlines the options available under the [ml2_type_geneve] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.17. ml2_type_geneve Configuration option = Default value Type Description max_header_size = 30 integer value Geneve encapsulation header size is dynamic, this value is used to calculate the maximum MTU for the driver. The default size for this field is 30, which is the size of the Geneve header without any additional option headers. vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation 8.6.5. ml2_type_gre The following table outlines the options available under the [ml2_type_gre] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.18. ml2_type_gre Configuration option = Default value Type Description tunnel_id_ranges = [] list value Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation 8.6.6. ml2_type_vlan The following table outlines the options available under the [ml2_type_vlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.19. 
ml2_type_vlan Configuration option = Default value Type Description network_vlan_ranges = [] list value List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks. 8.6.7. ml2_type_vxlan The following table outlines the options available under the [ml2_type_vxlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.20. ml2_type_vxlan Configuration option = Default value Type Description vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation vxlan_group = None string value Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured, will disable multicast VXLAN mode. 8.6.8. ovs_driver The following table outlines the options available under the [ovs_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.21. ovs_driver Configuration option = Default value Type Description vnic_type_prohibit_list = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors, like OVS version. In case of ovs mechanism driver the valid vnic types are normal and direct. Note that direct is supported only from kernel 4.8, and from ovs 2.8.0. Bind DIRECT (SR-IOV) port allows to offload the OVS flows using tc to the SR-IOV NIC. This allows to support hardware offload via tc and that allows us to manage the VF by OpenFlow control plane using representor net-device. 8.6.9. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.22. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.6.10. sriov_driver The following table outlines the options available under the [sriov_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.23. sriov_driver Configuration option = Default value Type Description vnic_type_prohibit_list = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors. In case of sriov mechanism driver the valid VNIC types are direct, macvtap and direct-physical. 8.7. neutron.conf This section contains options for the /etc/neutron/neutron.conf file. 8.7.1. 
DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/neutron.conf file. . Configuration option = Default value Type Description agent_down_time = 75 integer value Seconds after which the agent is considered down; should be at least twice report_interval, to be sure the agent is down for good. allow_automatic_dhcp_failover = True boolean value Automatically remove networks from offline DHCP agents. allow_automatic_l3agent_failover = False boolean value Automatically reschedule routers from offline L3 agents to online L3 agents. allow_bulk = True boolean value Allow the usage of the bulk API allow_overlapping_ips = False boolean value Allow overlapping IP support in Neutron. Attention: the following parameter MUST be set to False if Neutron is being used in conjunction with Nova security groups. allowed_conntrack_helpers = [{'amanda': 'tcp'}, {'ftp': 'tcp'}, {'h323': 'udp'}, {'h323': 'tcp'}, {'irc': 'tcp'}, {'netbios-ns': 'udp'}, {'pptp': 'tcp'}, {'sane': 'tcp'}, {'sip': 'udp'}, {'sip': 'tcp'}, {'snmp': 'udp'}, {'tftp': 'udp'}] list value Defines the allowed conntrack helpers, and conntrack helper module protocol constraints. `api_extensions_path = ` string value The path for API extensions. Note that this can be a colon-separated list of paths. For example: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The path of neutron.extensions is appended to this, so if your extensions are in there you don't need to specify them here. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_workers = None integer value Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. auth_strategy = keystone string value The type of authentication to use backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. backlog = 4096 integer value Number of backlog requests to configure the socket with base_mac = fa:16:3e:00:00:00 string value The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. bind_host = 0.0.0.0 host address value The host IP to bind to. bind_port = 9696 port value The port to bind to client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever.
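As a purely illustrative sketch, a few of the server options above might be combined as follows; the values shown are examples rather than recommendations:
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9696
auth_strategy = keystone
allow_overlapping_ips = True
api_workers = 4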
conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = neutron string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. core_plugin = None string value The core plugin Neutron will use debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zones = [] list value Default value of availability zone hints. The availability zone aware schedulers use this when the resources availability_zone_hints is empty. Multiple availability zones can be specified by a comma separated string. This value can be empty. In this case, even if availability_zone_hints for a resource is empty, availability zone is considered for high availability while scheduling the resource. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_agent_notification = True boolean value Allow sending resource operation notification to DHCP agent dhcp_agents_per_network = 1 integer value Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high availability for the DHCP service. However this does not provide high availability for the IPv6 metadata service in isolated networks. dhcp_lease_duration = 86400 integer value DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times. dhcp_load_type = networks string value Representing the resource type whose load is being reported by the agent. This can be "networks", "subnets" or "ports". When specified (Default is networks), the server will extract particular load sent as part of its agent configuration object from the agent report state, which is the number of resources being consumed, at every report_interval.dhcp_load_type can be used in combination with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to represent the choice for the resource being balanced. Example: dhcp_load_type=networks dns_domain = openstacklocal string value Domain to use for building the hostnames dvr_base_mac = fa:16:3f:00:00:00 string value The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. The dvr_base_mac must be different from base_mac to avoid mixing them up with MAC's allocated for tenant ports. A 4 octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 octet enable_dvr = True boolean value Determine if setup is configured for DVR. If False, DVR API extension will be disabled. 
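For instance, the DHCP scheduling options above could be combined as in this sketch, which schedules each tenant network to two DHCP agents and balances agents by network count; the agent count is an example value:
[DEFAULT]
dhcp_agents_per_network = 2
dhcp_load_type = networks
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
dhcp_lease_duration = 86400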
enable_new_agents = True boolean value Agent starts with admin_state_up=False when enable_new_agents=False. In the case, user's resources will not be scheduled automatically to the agent until admin changes admin_state_up to True. enable_services_on_agents_with_admin_state_down = False boolean value Enable services on an agent with admin_state_up False. If this option is False, when admin_state_up of an agent is turned False, services on it will be disabled. Agents with admin_state_up False are not selected for automatic scheduling regardless of this option. But manual scheduling to such agents is available if this option is True. enable_snat_by_default = True boolean value Define the default value of enable_snat if not provided in external_gateway_info. enable_traditional_dhcp = True boolean value If False, neutron-server will disable the following DHCP-agent related functions:1. DHCP provisioning block 2. DHCP scheduler API extension 3. Network scheduling mechanism 4. DHCP RPC/notification executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. external_dns_driver = None string value Driver for external DNS integration. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_validation = True boolean value If True, then allow plugins to decide whether to perform validations on filter parameters. Filter validation is enabled if this config is turned on and it is supported by all plugins global_physnet_mtu = 1500 integer value MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value. Defaults to 1500, the standard value for Ethernet. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. host = <based on operating system> host address value Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value. host_dvr_for_dhcp = True boolean value Flag to determine if hosting a DVR local router to the DHCP agent is desired. If False, any L3 function supported by the DHCP agent instance will not be possible, for instance: DNS. http_retries = 3 integer value Number of times client connections (nova, ironic) should be retried on a failed HTTP call. 0 (zero) means connection is attempted only once (not retried). Setting to any positive integer means that on failure the connection is retried that many times. For example, setting to 3 means total attempts to connect will be 4. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. ipam_driver = internal string value Neutron IPAM (IP address management) driver to use. By default, the reference implementation of the Neutron IPAM driver is used. ipv6_pd_enabled = False boolean value Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. 
Users making subnet creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of the default IPv6 subnetpool. l3_ha = False boolean value Enable HA mode for virtual routers. l3_ha_net_cidr = 169.254.192.0/18 string value Subnet used for the l3 HA admin network. `l3_ha_network_physical_name = ` string value The physical network name with which the HA network can be created. `l3_ha_network_type = ` string value The network type to use when creating the HA network for an HA router. By default or if empty, the first tenant_network_types is used. This is helpful when the VRRP traffic should use a specific network which is not the default one. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. 
Used by oslo_log.formatters.ContextFormatter max_allowed_address_pair = 10 integer value Maximum number of allowed address pairs max_dns_nameservers = 5 integer value Maximum number of DNS nameservers per subnet max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_l3_agents_per_router = 3 integer value Maximum number of L3 agents which an HA router will be scheduled on. If it is set to 0 then the router will be scheduled on every agent. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_routes = 30 integer value Maximum number of routes per router max_subnet_host_routes = 20 integer value Maximum number of host routes per subnet `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). metadata_proxy_socket = $state_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). migration_mode = False boolean value This option indicates that the environment is in the process of a mechanism driver migration from OVS to OVN. network_auto_schedule = True boolean value Allow auto scheduling networks to DHCP agent. network_link_prefix = None string value This string is prepended to the normal URL that is returned in links to the OpenStack Network API. If it is empty (the default), the URLs are returned unchanged. network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler string value Driver to use for scheduling network to DHCP agent notify_nova_on_port_data_changes = True boolean value Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its cache. notify_nova_on_port_status_changes = True boolean value Send notification to nova when port status changes pagination_max_limit = -1 string value The maximum number of items returned in a single response. A value of "infinite" or a negative integer means no limit. periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. retry_until_window = 30 integer value Number of seconds to keep retrying to listen router_auto_schedule = True boolean value Allow auto scheduling of routers to L3 agent. router_distributed = False boolean value System-wide flag to determine the type of router that tenants can create. Only admin can override.
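To illustrate, the HA-related options above could be combined as follows; this is a sketch only, and the CIDR shown is the documented default:
[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 3
l3_ha_net_cidr = 169.254.192.0/18
allow_automatic_l3agent_failover = True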
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler string value Driver to use for scheduling router to a default L3 agent rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? send_events_interval = 2 integer value Number of seconds between sending events to nova if there are any events to send. service_plugins = [] list value The service plugins Neutron will use setproctitle = on string value Set process name to match child worker role. Available options are: off - retains the behavior; on - renames processes to neutron-server: role (original string) ; brief - renames the same as on , but without the original string, such as neutron-server: role . state_path = /var/lib/neutron string value Where to store Neutron state files. This directory must be writable by the agent. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_ssl = False boolean value Enable SSL on the API server use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vlan_transparent = False boolean value If True, then allow plugins that support it to create VLAN transparent networks. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 
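As a hedged example, the plugin and messaging options described above might be set as in the following sketch; the short names ml2 and router are assumed to be available aliases for the corresponding plugins, and the host name and credentials in the transport URL are placeholders:
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://neutron:secret@messaging.example.com:5672/
rpc_workers = 4
rpc_state_report_workers = 1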
wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. wsgi_server_debug = False boolean value True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies. 8.7.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/neutron.conf file. Table 8.24. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node check_child_processes_action = respawn string value Action to be executed when a child process dies check_child_processes_interval = 60 integer value Interval between checks of child process liveness (seconds), use 0 to disable comment_iptables_rules = True boolean value Add comments to iptables rules. Set to false to disallow the addition of comments to generated iptables rules that describe each rule's purpose. System must support the iptables comments module for addition of comments. debug_iptables_rules = False boolean value Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty. kill_scripts_path = /etc/neutron/kill_scripts/ string value Location of scripts used to kill external processes. Names of scripts here must follow the pattern: "<process-name>-kill" where <process-name> is the name of the process which should be killed using this script. For example, kill script for dnsmasq process should be named "dnsmasq-kill". If path is set to None, then default "kill" command will be used to stop processes. log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. root_helper = sudo string value Root helper application. Use sudo neutron-rootwrap /etc/neutron/rootwrap.conf to use the real root filter facility. Change to sudo to skip the filtering and just run the command directly. root_helper_daemon = None string value Root helper daemon application to use when possible. Use sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf to run rootwrap in "daemon mode" which has been reported to improve performance at scale. For more information on running rootwrap in "daemon mode", see: https://docs.openstack.org/oslo.rootwrap/latest/user/usage.html#daemon-mode use_helper_for_ns_read = True boolean value Use the root helper when listing the namespaces on a system. This may not be required depending on the security configuration. If the root helper is not required, set this to False for a performance improvement. use_random_fully = True boolean value Use random-fully in SNAT masquerade rules. 8.7.3. cors The following table outlines the options available under the [cors] group in the /etc/neutron/neutron.conf file. Table 8.25.
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-Volume-microversion'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 8.7.4. database The following table outlines the options available under the [database] group in the /etc/neutron/neutron.conf file. Table 8.26. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. `engine = ` string value Database engine for which script will be generated when using offline migration. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. 
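An illustrative [database] section, assuming a MySQL back end, might look like the following; the SQLAlchemy connection URL, user, password, and host are placeholders:
[database]
connection = mysql+pymysql://neutron:secret@db.example.com/neutron
max_pool_size = 5
max_overflow = 50
connection_recycle_time = 3600
retry_interval = 10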
slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 8.7.5. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/neutron/neutron.conf file. Table 8.27. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 8.7.6. ironic The following table outlines the options available under the [ironic] group in the /etc/neutron/neutron.conf file. Table 8.28. ironic Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to enable_notifications = False boolean value Send notification events to ironic. (For example on relevant port status changes.) insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.7.
keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/neutron/neutron.conf file. Table 8.29. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. 
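As a minimal sketch only, the [keystone_authtoken] options might be combined as follows; the endpoint and memcached addresses are placeholders, and a real deployment also requires the credential options of the auth plugin selected through auth_type:
[keystone_authtoken]
www_authenticate_uri = http://keystone.example.com:5000
auth_type = password
interface = internal
memcached_servers = 192.0.2.20:11211
service_token_roles_required = True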
memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 8.7.8. nova The following table outlines the options available under the [nova] group in the /etc/neutron/neutron.conf file. Table 8.30. nova Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. 
It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_type = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file live_migration_events = False boolean value When this option is enabled, during the live migration, the OVS agent will only send the "vif-plugged-event" when the destination host interface is bound. This option also disables any other agent (like DHCP) to send to Nova this event when the port is provisioned.This option can be enabled if Nova patch https://review.opendev.org/c/openstack/nova/+/767368 is in place.This option is temporary and will be removed in Y and the behavior will be "True". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.9. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/neutron/neutron.conf file. Table 8.31. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 8.7.10. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/neutron/neutron.conf file. Table 8.32. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. 
broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. 
rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 8.7.11. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/neutron/neutron.conf file. Table 8.33. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 8.7.12. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/neutron/neutron.conf file. Table 8.34. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 8.7.13. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/neutron/neutron.conf file. Table 8.35. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is then used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat.
heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 8.7.14. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/neutron/neutron.conf file. Table 8.36. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 8.7.15. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/neutron/neutron.conf file. Table 8.37. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. 
It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 8.7.16. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/neutron/neutron.conf file. Table 8.38. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 8.7.17. placement The following table outlines the options available under the [placement] group in the /etc/neutron/neutron.conf file. Table 8.39. placement Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. 
domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_type = public string value Type of the placement endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region_name = None string value Name of placement region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.18. privsep The following table outlines the options available under the [privsep] group in the /etc/neutron/neutron.conf file. Table 8.40. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. logger_name = oslo_privsep.daemon string value Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 8.7.19. profiler The following table outlines the options available under the [profiler] group in the /etc/neutron/neutron.conf file. Table 8.41. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. 
es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separate place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 8.7.20. quotas The following table outlines the options available under the [quotas] group in the /etc/neutron/neutron.conf file. Table 8.42. quotas Configuration option = Default value Type Description default_quota = -1 integer value Default number of resources allowed per tenant. A negative value means unlimited. quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver string value Default driver to use for quota checks. quota_floatingip = 50 integer value Number of floating IPs allowed per tenant. A negative value means unlimited. quota_network = 100 integer value Number of networks allowed per tenant. A negative value means unlimited. quota_port = 500 integer value Number of ports allowed per tenant. A negative value means unlimited. quota_router = 10 integer value Number of routers allowed per tenant. A negative value means unlimited. quota_security_group = 10 integer value Number of security groups allowed per tenant. A negative value means unlimited. quota_security_group_rule = 100 integer value Number of security rules allowed per tenant. A negative value means unlimited. quota_subnet = 100 integer value Number of subnets allowed per tenant. A negative value means unlimited.
track_quota_usage = True boolean value Keep in track in the database of current resource quota usage. Plugins which do not leverage the neutron database should set this flag to False. 8.7.21. ssl The following table outlines the options available under the [ssl] group in the /etc/neutron/neutron.conf file. Table 8.43. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 8.8. openvswitch_agent.ini This section contains options for the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. 8.8.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". 
log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.8.2. 
agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.44. agent Configuration option = Default value Type Description arp_responder = False boolean value Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay. NOTE: If enable_distributed_routing is set to True then arp_responder will automatically be set to True in the agent, regardless of the setting in the config file. baremetal_smartnic = False boolean value Enable the agent to process Smart NIC ports. dont_fragment = True boolean value Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel. drop_flows_on_start = False boolean value Reset flow table on start. Setting this to True will cause brief traffic interruption. enable_distributed_routing = False boolean value Make the l2 agent run in DVR mode. explicitly_egress_direct = False boolean value When set to True, the accepted egress unicast traffic will not use action NORMAL. The accepted egress packets will be taken care of in the final egress tables direct output flows for unicast traffic. extensions = [] list value Extensions list to use l2_population = False boolean value Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability. minimize_polling = True boolean value Minimize polling by monitoring ovsdb for interface changes. ovsdb_monitor_respawn_interval = 30 integer value The number of seconds to wait before respawning the ovsdb monitor after losing communication with it. tunnel_csum = False boolean value Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel. tunnel_types = [] list value Network types supported by the agent (gre, vxlan and/or geneve). veth_mtu = 9000 integer value MTU size of veth interfaces vxlan_udp_port = 4789 port value The UDP port to use for VXLAN tunnels. 8.8.3. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.45. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.8.4. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.46. ovs Configuration option = Default value Type Description bridge_mappings = [] list value Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent's node-specific Open vSwitch bridge names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won't be managed by the agent anymore. datapath_type = system string value OVS datapath to use. system is the default value and corresponds to the kernel datapath. 
To enable the userspace datapath set this value to netdev . disable_packet_marking = False boolean value Disables the packet marking when the QoS extension is enabled. This option needs to be enabled when using OVS with hardware offload until the skb_priority , sbk_mark and output queue fields are supported and can be offloaded. If this options is enabled, no rate QoS rule (bandwidth limit nor minimum bandwidth) will work for VirtIO ports. int_peer_patch_port = patch-tun string value Peer patch port in integration bridge for tunnel bridge. integration_bridge = br-int string value Integration bridge to use. Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge acts as a virtual patch bay . All VM VIFs are attached to this bridge and then patched according to their network connectivity. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). of_connect_timeout = 300 integer value Timeout in seconds to wait for the local switch connecting the controller. of_inactivity_probe = 10 integer value The inactivity_probe interval in seconds for the local switch connection to the controller. A value of 0 disables inactivity probes. of_listen_address = 127.0.0.1 IP address value Address to listen on for OpenFlow connections. of_listen_port = 6633 port value Port to listen on for OpenFlow connections. of_request_timeout = 300 integer value Timeout in seconds to wait for a single OpenFlow request. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs resource_provider_bandwidths = [] list value Comma-separated list of <bridge>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given bridge in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The bridge must appear in bridge_mappings as the value. But not all bridges in bridge_mappings must be listed here. For a bridge not listed here we neither create a resource provider in placement nor report inventories against. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_default_hypervisor = None string value The default hypervisor name used to locate the parent of the resource provider. If this option is not set, canonical name is used resource_provider_hypervisors = {} dict value Mapping of bridges to hypervisors: <bridge>:<hypervisor>,... hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the resource_provider_default_hypervisor config option value as known by the nova-compute managing that hypervisor. resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. 
Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection tun_peer_patch_port = patch-int string value Peer patch port in tunnel bridge for integration bridge. tunnel_bridge = br-tun string value Tunnel bridge to use. vhostuser_socket_dir = /var/run/openvswitch string value OVS vhost-user socket directory. 8.8.5. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.47. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.9. sriov_agent.ini This section contains options for the /etc/neutron/plugins/ml2/sriov_agent.ini file. 8.9.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. 
Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. 
Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.9.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.48. agent Configuration option = Default value Type Description extensions = [] list value Extensions list to use 8.9.3. sriov_nic The following table outlines the options available under the [sriov_nic] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.49. sriov_nic Configuration option = Default value Type Description exclude_devices = [] list value Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device to the agent's node-specific list of virtual functions that should not be used for virtual networking. vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list. physical_device_mappings = [] list value Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent's node-specific physical network device interfaces of SR-IOV physical function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. resource_provider_bandwidths = [] list value Comma-separated list of <network_device>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given device in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The device must appear in physical_device_mappings as the value. But not all devices in physical_device_mappings must be listed here. For a device not listed here we neither create a resource provider in placement nor report inventories against. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_default_hypervisor = None string value The default hypervisor name used to locate the parent of the resource provider. If this option is not set, canonical name is used resource_provider_hypervisors = {} dict value Mapping of network devices to hypervisors: <network_device>:<hypervisor>,... hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the resource_provider_default_hypervisor config option value as known by the nova-compute managing that hypervisor. 
resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/neutron_2 |
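To make the agent options above easier to apply, the following is a minimal, illustrative sketch of how a few of them might be combined in /etc/neutron/plugins/ml2/openvswitch_agent.ini and /etc/neutron/plugins/ml2/sriov_agent.ini. The bridge names, interface names, PCI addresses, and IP address are placeholder assumptions chosen for this example, not recommended values; only the section and option names come from the reference above.

# /etc/neutron/plugins/ml2/openvswitch_agent.ini -- illustrative values only
[ovs]
# Map physical network "physnet1" to a node-local bridge (bridge names must be 11 characters or fewer)
bridge_mappings = physnet1:br-ex
# Local endpoint for overlay (tunnel) traffic; the IP version must match overlay_ip_version on the server
local_ip = 192.0.2.10

[agent]
# Overlay network types this agent supports
tunnel_types = vxlan
# Learn remote MACs and IPs via the ML2 l2population mechanism driver
l2_population = True

[securitygroup]
enable_security_group = True
# Example driver name; confirm the firewall driver shipped with your release before relying on it
firewall_driver = openvswitch

# /etc/neutron/plugins/ml2/sriov_agent.ini -- illustrative values only
[sriov_nic]
# Map physical network "physnet2" to the SR-IOV physical function on this node
physical_device_mappings = physnet2:ens2f0
# Exclude these virtual functions (identified by PCI address) on ens2f0 from virtual networking
exclude_devices = ens2f0:0000:81:10.1;0000:81:10.2

In a Red Hat OpenStack Platform deployment these files are normally managed by the overcloud configuration tooling rather than edited by hand; the snippet is only meant to show how the documented option groups map onto the files.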
probe::netfilter.bridge.pre_routing | probe::netfilter.bridge.pre_routing Name probe::netfilter.bridge.pre_routing - - Called before a bridging packet is routed Synopsis netfilter.bridge.pre_routing Values llcproto_stp Constant used to signify Bridge Spanning Tree Protocol packet pf Protocol family -- always " bridge " nf_drop Constant used to signify a 'drop' verdict br_msg Message age in 1/256 secs indev Address of net_device representing input device, 0 if unknown brhdr Address of bridge header nf_queue Constant used to signify a 'queue' verdict br_fd Forward delay in 1/256 secs br_mac Bridge MAC address br_rid Identity of root bridge nf_accept Constant used to signify an 'accept' verdict br_flags BPDU flags outdev_name Name of network device packet will be routed to (if known) br_prid Protocol identifier br_rmac Root bridge MAC address br_htime Hello time in 1/256 secs br_bid Identity of bridge protocol Packet protocol br_max Max age in 1/256 secs br_type BPDU type nf_stop Constant used to signify a 'stop' verdict br_cost Total cost from transmitting bridge to root nf_stolen Constant used to signify a 'stolen' verdict length The length of the packet buffer contents, in bytes llcpdu Address of LLC Protocol Data Unit outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict indev_name Name of network device packet was received on (if known) br_poid Port identifier br_vid Protocol version identifier | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netfilter-bridge-pre-routing |
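As a hedged illustration of how this probe point can be used, the short SystemTap script below prints the input and output device names, the protocol, and the packet length for each bridged packet seen before routing, using only the values listed above. It assumes SystemTap and the matching kernel debug information are installed; it reads the probe values and does not attempt to alter the packet verdict.

# bridge_pre_routing.stp -- trace bridged packets before routing (sketch)
probe netfilter.bridge.pre_routing {
  # indev_name/outdev_name may be empty if the device is unknown
  printf("pre_routing: in=%s out=%s proto=0x%04x len=%d\n",
         indev_name, outdev_name, protocol, length)
}

Running it with, for example, stap -v bridge_pre_routing.stp prints one line per bridged packet until the script is interrupted.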
Chapter 93. MLLP | Chapter 93. MLLP Both producer and consumer are supported The MLLP component is specifically designed to handle the nuances of the MLLP protocol and provide the functionality required by Healthcare providers to communicate with other systems using the MLLP protocol. The MLLP component provides a simple configuration URI, automated HL7 acknowledgment generation and automatic acknowledgment interrogation. The MLLP protocol does not typically use a large number of concurrent TCP connections - a single active TCP connection is the normal case. Therefore, the MLLP component uses a simple thread-per-connection model based on standard Java Sockets. This keeps the implementation simple and limits the dependencies to Camel itself. The component supports the following: A Camel consumer using a TCP Server A Camel producer using a TCP Client The MLLP component uses byte[] payloads, and relies on Camel type conversion to convert byte[] to other types. 93.1. Dependencies When using mllp with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mllp-starter</artifactId> </dependency> 93.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 93.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 93.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 93.3. Component Options The MLLP component supports 30 options, which are listed below. Name Description Default Type autoAck (common) Enable/Disable the automatic generation of an MLLP Acknowledgement MLLP Consumers only. true boolean charsetName (common) Sets the default charset to use. String configuration (common) Sets the default configuration to use when creating MLLP endpoints. MllpConfiguration hl7Headers (common) Enable/Disable the automatic generation of message headers from the HL7 Message MLLP Consumers only. true boolean requireEndOfData (common) Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START_OF_BLOCK hl7 payload END_OF_BLOCK END_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional.
true boolean stringPayload (common) Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated and converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used. true boolean validatePayload (common) Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. false boolean acceptTimeout (consumer) Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. 60000 int backlog (consumer) The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer bindRetryInterval (consumer) TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 int bindTimeout (consumer) TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to receive incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignoring them. true boolean lenientBind (consumer) TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false boolean maxConcurrentConsumers (consumer) The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections are already established, the new connection will be reset immediately. 5 int reuseAddress (consumer) Enable/disable the SO_REUSEADDR socket option. false Boolean exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut InOut ExchangePattern connectTimeout (producer) Timeout (in milliseconds) for establishing a TCP connection TCP Client only. 30000 int idleTimeoutStrategy (producer) Decide what action to take when idle timeout occurs. Possible values are: RESET: set SO_LINGER to 0 and reset the socket. CLOSE: close the socket gracefully. The default is RESET. Enum values: RESET CLOSE RESET MllpIdleTimeoutStrategy keepAlive (producer) Enable/disable the SO_KEEPALIVE socket option. true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean tcpNoDelay (producer) Enable/disable the TCP_NODELAY socket option. true Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultCharset (advanced) Set the default character set to use for byte to/from String conversions. ISO-8859-1 String logPhi (advanced) Whether to log PHI. true Boolean logPhiMaxBytes (advanced) Set the maximum number of bytes of PHI that will be logged in a log entry. 5120 Integer readTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 int receiveBufferSize (advanced) Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer receiveTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. 15000 int sendBufferSize (advanced) Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer idleTimeout (tcp) The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer 93.4. Endpoint Options The MLLP endpoint is configured using URI syntax: with the following path and query parameters: 93.4.1. Path Parameters (2 parameters) Name Description Default Type hostname (common) Required Hostname or IP for connection for the TCP connection. The default value is null, which means any local IP address. String port (common) Required Port number for the TCP connection. int 93.4.2. Query Parameters (26 parameters) Name Description Default Type autoAck (common) Enable/Disable the automatic generation of a MLLP Acknowledgement MLLP Consumers only. true boolean charsetName (common) Sets the default charset to use. String hl7Headers (common) Enable/Disable the automatic generation of message headers from the HL7 Message MLLP Consumers only. true boolean requireEndOfData (common) Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START_OF_BLOCKhl7 payloadEND_OF_BLOCKEND_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. true boolean stringPayload (common) Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine th appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be use. true boolean validatePayload (common) Enable/Disable the validation of HL7 Payloads If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If and invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. 
false boolean acceptTimeout (consumer) Timeout (in milliseconds) while waiting for a TCP connection. TCP Server Only. 60000 int backlog (consumer) The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer bindRetryInterval (consumer) TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 int bindTimeout (consumer) TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignoring them. true boolean lenientBind (consumer) TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false boolean maxConcurrentConsumers (consumer) The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately. 5 int reuseAddress (consumer) Enable/disable the SO_REUSEADDR socket option. false Boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut InOut ExchangePattern connectTimeout (producer) Timeout (in milliseconds) for establishing a TCP connection. TCP Client only. 30000 int idleTimeoutStrategy (producer) Decide what action to take when an idle timeout occurs. Possible values are: RESET: set SO_LINGER to 0 and reset the socket. CLOSE: close the socket gracefully. The default is RESET. Enum values: RESET CLOSE RESET MllpIdleTimeoutStrategy keepAlive (producer) Enable/disable the SO_KEEPALIVE socket option. true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false boolean tcpNoDelay (producer) Enable/disable the TCP_NODELAY socket option. true Boolean readTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 int receiveBufferSize (advanced) Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer receiveTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame.
15000 int sendBufferSize (advanced) Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer idleTimeout (tcp) The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer 93.5. MLLP Consumer The MLLP Consumer supports receiving MLLP-framed messages and sending HL7 Acknowledgements. The MLLP Consumer can automatically generate the HL7 Acknowledgement (HL7 Application Acknowledgements only - AA, AE and AR), or the acknowledgement can be specified using the CamelMllpAcknowledgement exchange property. Additionally, the type of acknowledgement that will be generated can be controlled by setting the CamelMllpAcknowledgementType exchange property. The MLLP Consumer can read messages without sending any HL7 Acknowledgement if the automatic acknowledgement is disabled and the exchange pattern is InOnly. 93.5.1. Message Headers The MLLP Consumer adds these headers on the Camel message: Key Description CamelMllpLocalAddress The local TCP Address of the Socket CamelMllpRemoteAddress The remote TCP Address of the Socket CamelMllpSendingApplication MSH-3 value CamelMllpSendingFacility MSH-4 value CamelMllpReceivingApplication MSH-5 value CamelMllpReceivingFacility MSH-6 value CamelMllpTimestamp MSH-7 value CamelMllpSecurity MSH-8 value CamelMllpMessageType MSH-9 value CamelMllpEventType MSH-9-1 value CamelMllpTriggerEvent MSH-9-2 value CamelMllpMessageControlId MSH-10 value CamelMllpProcessingId MSH-11 value CamelMllpVersionId MSH-12 value CamelMllpCharset MSH-18 value All headers are String types. If a header value is missing, its value is null. 93.5.2. Exchange Properties The type of acknowledgment the MLLP Consumer generates and the state of the TCP Socket can be controlled by these properties on the Camel exchange: Key Type Description CamelMllpAcknowledgement byte[] If present, this property will be sent to the client as the MLLP Acknowledgement CamelMllpAcknowledgementString String If present and CamelMllpAcknowledgement is not present, this property will be sent to the client as the MLLP Acknowledgement CamelMllpAcknowledgementMsaText String If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the contents of MSA-3 in the generated HL7 acknowledgement CamelMllpAcknowledgementType String If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the HL7 acknowledgement type (i.e. AA, AE, AR) CamelMllpAutoAcknowledge Boolean Overrides the autoAck query parameter CamelMllpCloseConnectionBeforeSend Boolean If true, the Socket will be closed before sending data CamelMllpResetConnectionBeforeSend Boolean If true, the Socket will be reset before sending data CamelMllpCloseConnectionAfterSend Boolean If true, the Socket will be closed immediately after sending data CamelMllpResetConnectionAfterSend Boolean If true, the Socket will be reset immediately after sending any data
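A minimal sketch of how these exchange properties can be used together with the automatic acknowledgement: in the following Java DSL route, the listening port and the hl7MessageHandler bean are illustrative assumptions; when processing fails, the route switches the generated acknowledgement to an Application Error (AE) and fills MSA-3 with the exception message.

import org.apache.camel.builder.RouteBuilder;

public class MllpConsumerAckRoute extends RouteBuilder {
    @Override
    public void configure() {
        // MLLP TCP server; autoAck is true by default, so the acknowledgement is generated automatically.
        from("mllp:0.0.0.0:8888")
            .doTry()
                // Hypothetical processing step; replace with real message handling.
                .to("bean:hl7MessageHandler")
            .doCatch(Exception.class)
                // Ask the auto-generated acknowledgement to report an Application Error (AE).
                .setProperty("CamelMllpAcknowledgementType", constant("AE"))
                // Optionally put an explanation into MSA-3 of the generated acknowledgement.
                .setProperty("CamelMllpAcknowledgementMsaText", simple("${exception.message}"))
            .end();
    }
}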
93.6. MLLP Producer The MLLP Producer supports sending MLLP-framed messages and receiving HL7 Acknowledgements. The MLLP Producer interrogates the received HL7 acknowledgement and raises an exception if a negative acknowledgement is received. The MLLP Producer can ignore acknowledgements when configured with the InOnly exchange pattern. 93.6.1. Message Headers The MLLP Producer adds these headers on the Camel message: Key Description CamelMllpLocalAddress The local TCP Address of the Socket CamelMllpRemoteAddress The remote TCP Address of the Socket CamelMllpAcknowledgement The HL7 Acknowledgment byte[] received CamelMllpAcknowledgementString The HL7 Acknowledgment received, converted to a String 93.6.2. Exchange Properties The state of the TCP Socket can be controlled by these properties on the Camel exchange: Key Type Description CamelMllpCloseConnectionBeforeSend Boolean If true, the Socket will be closed before sending data CamelMllpResetConnectionBeforeSend Boolean If true, the Socket will be reset before sending data CamelMllpCloseConnectionAfterSend Boolean If true, the Socket will be closed immediately after sending data CamelMllpResetConnectionAfterSend Boolean If true, the Socket will be reset immediately after sending any data
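A minimal producer sketch along the same lines: the direct:sendHl7 endpoint and the target host hl7.example.com:8888 are illustrative assumptions; the route sends the HL7 payload to a remote MLLP listener and logs the acknowledgement header that the component sets after the send.

import org.apache.camel.builder.RouteBuilder;

public class MllpProducerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Illustrative upstream endpoint; the message body is the HL7 payload
        // (byte[] or String, handled by Camel type conversion).
        from("direct:sendHl7")
            // MLLP TCP client; a negative acknowledgement from the remote system raises an exception.
            .to("mllp:hl7.example.com:8888")
            // The acknowledgement returned by the remote system is available as a header.
            .log("Received acknowledgement: ${header.CamelMllpAcknowledgementString}");
    }
}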
93.7. Spring Boot Auto-Configuration The component supports 31 options, which are listed below. Name Description Default Type camel.component.mllp.accept-timeout Timeout (in milliseconds) while waiting for a TCP connection. TCP Server Only. 60000 Integer camel.component.mllp.auto-ack Enable/Disable the automatic generation of a MLLP Acknowledgement. MLLP Consumers only. true Boolean camel.component.mllp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mllp.backlog The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer camel.component.mllp.bind-retry-interval TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 Integer camel.component.mllp.bind-timeout TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 Integer camel.component.mllp.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignoring them. true Boolean camel.component.mllp.charset-name Sets the default charset to use. String camel.component.mllp.configuration Sets the default configuration to use when creating MLLP endpoints. The option is an org.apache.camel.component.mllp.MllpConfiguration type. MllpConfiguration camel.component.mllp.connect-timeout Timeout (in milliseconds) for establishing a TCP connection. TCP Client only. 30000 Integer camel.component.mllp.default-charset Set the default character set to use for byte to/from String conversions. ISO-8859-1 String camel.component.mllp.enabled Whether to enable auto configuration of the mllp component. This is enabled by default. Boolean camel.component.mllp.exchange-pattern Sets the exchange pattern when the consumer creates an exchange. ExchangePattern camel.component.mllp.hl7-headers Enable/Disable the automatic generation of message headers from the HL7 Message. MLLP Consumers only. true Boolean camel.component.mllp.idle-timeout The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer camel.component.mllp.idle-timeout-strategy Decide what action to take when an idle timeout occurs. Possible values are: RESET: set SO_LINGER to 0 and reset the socket. CLOSE: close the socket gracefully. The default is RESET. MllpIdleTimeoutStrategy camel.component.mllp.keep-alive Enable/disable the SO_KEEPALIVE socket option. true Boolean camel.component.mllp.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.mllp.lenient-bind TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false Boolean camel.component.mllp.log-phi Whether to log PHI. true Boolean camel.component.mllp.log-phi-max-bytes Set the maximum number of bytes of PHI that will be logged in a log entry. 5120 Integer camel.component.mllp.max-concurrent-consumers The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately. 5 Integer camel.component.mllp.read-timeout The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 Integer camel.component.mllp.receive-buffer-size Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer camel.component.mllp.receive-timeout The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. 15000 Integer camel.component.mllp.require-end-of-data Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START_OF_BLOCK hl7 payload END_OF_BLOCK END_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. true Boolean camel.component.mllp.reuse-address Enable/disable the SO_REUSEADDR socket option. false Boolean camel.component.mllp.send-buffer-size Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer camel.component.mllp.string-payload Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used. true Boolean camel.component.mllp.tcp-no-delay Enable/disable the TCP_NODELAY socket option.
true Boolean camel.component.mllp.validate-payload Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mllp-starter</artifactId> </dependency>",
"mllp:hostname:port"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mllp-component-starter |
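The Spring Boot auto-configuration options for the MLLP component shown in section 93.7 above map directly onto application.properties keys. A minimal sketch, with purely illustrative values, might look like this:

# application.properties - illustrative MLLP component configuration (values are examples only)
camel.component.mllp.auto-ack=true
camel.component.mllp.default-charset=ISO-8859-1
camel.component.mllp.connect-timeout=30000
camel.component.mllp.idle-timeout=300000
camel.component.mllp.log-phi=false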
Serverless | Serverless OpenShift Container Platform 4.9 OpenShift Serverless installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing spec: config: features: new-trigger-filters: enabled",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing",
"oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1",
"kn event send --to-url https://ce-api.foo.example.com/",
"kn event send --to Service:serving.knative.dev/v1:event-display",
"[analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle",
"Error: failed to get credentials: failed to verify credentials: status code: 404",
"buildEnvs: []",
"buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"spec: config: network: defaultExternalScheme: \"http\"",
"spec: config: network: defaultExternalScheme: \"https\"",
"spec: ingress: kourier: service-type: LoadBalancer",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-serverless --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: serverless-operators namespace: openshift-serverless spec: {} --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable 1 name: serverless-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f serverless-subscription.yaml",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE serverless-operator.v1.25.0 Red Hat OpenShift Serverless 1.25.0 serverless-operator.v1.24.0 Succeeded",
"kn: No such file or directory",
"tar -xf <file>",
"echo USDPATH",
"oc get ConsoleCLIDownload",
"NAME DISPLAY NAME AGE kn kn - OpenShift Serverless Command Line Interface (CLI) 2022-09-20T08:41:18Z oc-cli-downloads oc - OpenShift Command Line Interface (CLI) 2022-09-20T08:00:20Z",
"oc get route -n openshift-serverless",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kn kn-openshift-serverless.apps.example.com knative-openshift-metrics-3 http-cli edge/Redirect None",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager attach --pool=<pool_id> 1",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-ppc64le-rpms\"",
"yum install openshift-serverless-clients",
"kn: No such file or directory",
"tar -xf <filename>",
"echo USDPATH",
"echo USDPATH",
"C:\\> path",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving",
"oc apply -f serving.yaml",
"oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s",
"oc get pods -n knative-serving-ingress",
"NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing",
"oc apply -f eventing.yaml",
"oc get knativeeventing.operator.knative.dev/knative-eventing -n knative-eventing --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"InstallSucceeded=True Ready=True",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE broker-controller-58765d9d49-g9zp6 1/1 Running 0 7m21s eventing-controller-65fdd66b54-jw7bh 1/1 Running 0 7m31s eventing-webhook-57fd74b5bd-kvhlz 1/1 Running 0 7m31s imc-controller-5b75d458fc-ptvm2 1/1 Running 0 7m19s imc-dispatcher-64f6d5fccb-kkc4c 1/1 Running 0 7m18s",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s",
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"podman machine init --memory=8192 --cpus=2 --disk-size=20",
"podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully",
"export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'",
"kn func build -v",
"The installed KnativeServing version is v1.5.0.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable name: serverless-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual startingCSV: serverless-operator.v1.26.0",
"oc apply -f serverless-subscription.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"kn service create <service-name> --image <image> --tag <tag-value>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: \"Hello Serverless!\"",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test",
"Service 'event-display' created in namespace 'test'.",
"tree ./",
"./ βββ test βββ ksvc βββ event-display.yaml 2 directories, 1 file",
"cat test/ksvc/event-display.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}",
"kn service describe event-display --target ./ --namespace test",
"Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON",
"kn service create -f test/ksvc/event-display.yaml",
"Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com",
"oc get ksvc <service_name>",
"NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True",
"curl http://event-delivery-default.example.com",
"curl https://event-delivery-default.example.com",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --insecure",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --cacert <file>",
"Hello Serverless!",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: \"0\"",
"kn service create <service_name> --image <image_uri> --scale-min <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: \"10\"",
"kn service create <service_name> --image <image_uri> --scale-max <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: \"200\"",
"kn service create <service_name> --image <image_uri> --concurrency-target <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50",
"kn service create <service_name> --image <image_uri> --concurrency-limit <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: \"70\"",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: \"false\" 1",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: \"30s\" 1",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: net-kourier-controller readinessProbes: 1 - container: controller timeoutSeconds: 10 - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: serving.knative.dev/v1 kind: Service spec: template: spec: containers: - name: first-container 1 image: gcr.io/knative-samples/helloworld-go ports: - containerPort: 8080 2 - name: second-container 3 image: gcr.io/knative-samples/helloworld-java",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-volumes-emptydir: enabled",
"spec: config: features: \"kubernetes.podspec-persistent-volume-claim\": enabled \"kubernetes.podspec-persistent-volume-write\": enabled",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns spec: template: spec: containers: volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled",
"oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret",
"spec: config: network: internal-encryption: \"true\"",
"oc delete pod -n knative-serving --selector app=activator",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: []",
"oc label namespace knative-serving knative.openshift.io/system-namespace=true",
"oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true",
"oc label namespace knative-eventing knative.openshift.io/system-namespace=true",
"oc label namespace knative-kafka knative.openshift.io/system-namespace=true",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: \"true\" podSelector: {} policyTypes: - Ingress",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0",
"kn service update <service_name> --traffic <revision>=<percentage>",
"kn service update example-service --traffic @latest=20,stable=80",
"kn service update example-service --traffic @latest=10,stable=60",
"kn service update <service_name> --tag @latest=example-tag",
"kn service update <service_name> --untag example-tag",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}'",
"example-service-00001",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision",
"oc get ksvc <service_name>",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route",
"oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"",
"spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2",
"spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value>",
"kn service create <service_name> --image=<image> --annotation <annotation_name>=<annotation_value> --label <label_value>=<label_value>",
"oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=<service_name> \\ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \\ 2 -n knative-serving-ingress -o yaml | grep -e \"<label_name>: \\\"<label_value>\\\"\" -e \"<annotation_name>: <annotation_value>\" 3",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: \"true\" spec: template: spec: containers: - image: <image>",
"oc apply -f <filename>",
"kn service create <service_name> --image=gcr.io/knative-samples/helloworld-go --annotation serving.knative.openshift.io/disableRoute=true",
"USD oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE -n knative-serving-ingress",
"No resources found in knative-serving-ingress namespace.",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE---- wildcardPolicy: None",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: \"redirected\"",
"spec: config: network: default-external-scheme: \"https\"",
"spec: config: network: default-external-scheme: \"http\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: \"redirected\" spec:",
"oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local",
"oc get ksvc",
"NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True",
"export san=\"knative\"",
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example/CN=Example' -keyout ca.key -out ca.crt",
"openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key -subj \"/CN=Example/O=Example\" -addext \"subjectAltName = DNS:USDsan\"",
"openssl x509 -req -extfile <(printf \"subjectAltName=DNS:USDsan\") -days 365 -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt",
"oc create -n knative-serving-ingress secret tls server-certs --key=tls.key --cert=tls.crt --dry-run=client -o yaml | oc apply -f -",
"spec: config: kourier: cluster-cert-secret: server-certs",
"spec: ingress: kourier: service-type: ClusterIP",
"spec: ingress: kourier: service-type: LoadBalancer",
"oc annotate knativeserving <your_knative_CR> -n knative-serving serverless.openshift.io/default-enable-http2=true",
"oc get svc -n knative-serving-ingress kourier -o jsonpath=\"{.spec.ports[0].appProtocol}\"",
"h2c",
"import \"google.golang.org/grpc\" grpc.Dial( YOUR_URL, 1 grpc.WithTransportCredentials(insecure.NewCredentials())), 2 )",
"spec: ingress: kourier: service-type: LoadBalancer",
"oc -n knative-serving-ingress get svc kourier",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m",
"curl -H \"Host: hello-default.example.com\" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com",
"Hello Serverless!",
"import \"google.golang.org/grpc\" grpc.Dial( \"a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80\", grpc.WithAuthority(\"hello-default.example.com:80\"), grpc.WithInsecure(), )",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: [\"[email protected]/[email protected]\"]",
"oc apply -f <filename>",
"curl http://hello-example-1-default.apps.mycluster.example.com/",
"RBAC: access denied",
"TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -",
"curl -H \"Authorization: Bearer USDTOKEN\" http://hello-example-1-default.apps.example.com",
"Hello OpenShift!",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2",
"oc apply -f <filename>",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: \"https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json\" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN",
"oc apply -f <filename>",
"curl http://hello-example-default.apps.mycluster.example.com/",
"Origin authentication failed.",
"TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -",
"curl http://hello-example-default.apps.mycluster.example.com/ -H \"Authorization: Bearer USDTOKEN\"",
"Hello OpenShift!",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1",
"oc apply -f <filename>",
"kn domain create <domain_mapping_name> --ref <target_name>",
"kn domain create example-domain-map --ref example-service",
"kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>",
"kn domain create example-domain-map --ref ksvc:example-service:example-namespace",
"kn domain create <domain_mapping_name> --ref <kroute:route_name>",
"kn domain create example-domain-map --ref kroute:example-route",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: custom-ksvc-domain.example.com namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1",
"curl custom-ksvc-domain.example.com",
"Hello OpenShift!",
"oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>",
"oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid=\"<id>\"",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 TLS block specifies the secret to be used tls: secretName: <tls_secret_name>",
"oc get domainmapping <domain_name>",
"NAME URL READY REASON example.com https://example.com True",
"curl https://<domain_name>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'true' name: net-kourier-controller",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource",
"kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn trigger create <trigger_name> --sink ksvc:<service_name>",
"oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source apiserver describe <source_name>",
"Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"kn trigger delete <trigger_name>",
"kn source apiserver delete <source_name>",
"oc delete -f authentication.yaml",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display",
"oc get apiserversource.sources.knative.dev testevents -o yaml",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: \"2020-04-07T17:24:54Z\" generation: 1 name: testevents namespace: default resourceVersion: \"62868\" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2 uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: \"\" kind: \"\" name: \"\" uid: \"\" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"oc delete -f trigger.yaml",
"oc delete -f k8s-events.yaml",
"oc delete -f authentication.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display",
"kn source ping describe test-ping-source",
"Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"kn delete pingsources.sources.knative.dev <ping_source_name>",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" 1 data: '{\"message\": \"Hello world!\"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" data: '{\"message\": \"Hello world!\"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pingsource.sources.knative.dev <ping_source_name> -oyaml",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: \"2020-04-07T16:11:14Z\" generation: 1 name: test-ping-source namespace: default resourceVersion: \"55257\" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: \"hello\" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"oc delete -f <filename>",
"oc delete -f ping-source.yaml",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display",
"kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display",
"kn source kafka describe <kafka_source_name>",
"Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h",
"oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" \\ 1 --from-literal=user=\"my-sasl-user\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password type: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml",
"spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display",
"apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"kn source binding describe bind-heartbeat",
"Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"*/1 * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: - matchExpression: key: working operator: In values: - example - sample",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"oc label namespace <namespace> bindings.knative.dev/include=true",
"package main import ( \"context\" \"encoding/json\" \"flag\" \"fmt\" \"log\" \"os\" \"strconv\" \"time\" duckv1 \"knative.dev/pkg/apis/duck/v1\" cloudevents \"github.com/cloudevents/sdk-go/v2\" \"github.com/kelseyhightower/envconfig\" ) type Heartbeat struct { Sequence int `json:\"id\"` Label string `json:\"label\"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, \"eventSource\", \"\", \"the event-source (CloudEvents)\") flag.StringVar(&eventType, \"eventType\", \"dev.knative.eventing.samples.heartbeat\", \"the event-type (CloudEvents)\") flag.StringVar(&sink, \"sink\", \"\", \"the host url to heartbeat to\") flag.StringVar(&label, \"label\", \"\", \"a special label\") flag.StringVar(&periodStr, \"period\", \"5\", \"the number of seconds between heartbeats\") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:\"K_SINK\"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:\"K_CE_OVERRIDES\"` // Name of this pod. Name string `envconfig:\"POD_NAME\" required:\"true\"` // Namespace this pod exists in. Namespace string `envconfig:\"POD_NAMESPACE\" required:\"true\"` // Whether to run continuously or exit. OneShot bool `envconfig:\"ONE_SHOT\" default:\"false\"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process(\"\", &env); err != nil { log.Printf(\"[ERROR] Failed to process env var: %s\", err) os.Exit(1) } if env.Sink != \"\" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf(\"[ERROR] Unparseable CloudEvents overrides %s: %v\", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf(\"failed to create http protocol: %s\", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf(\"failed to create client: %s\", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == \"\" { eventSource = fmt.Sprintf(\"https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s\", env.Namespace, env.Name) log.Printf(\"Heartbeats Source: %s\", eventSource) } if len(label) > 0 && label[0] == '\"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent(\"1.0\") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension(\"the\", 42) event.SetExtension(\"heart\", \"yes\") event.SetExtension(\"beats\", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf(\"failed to set cloudevents data: %s\", err.Error()) } log.Printf(\"sending cloudevent to %s\", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf(\"failed to send cloudevent: %v\", res) } if env.OneShot { return } // Wait for next tick <-ticker.C } }",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: example-service",
"kn source container create <container_source_name> --image <image_uri> --sink <sink>",
"kn source container delete <container_source_name>",
"kn source container describe <container_source_name>",
"kn source container list",
"kn source container list -o yaml",
"kn source container update <container_source_name> --image <image_uri>",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"mypod\" - name: POD_NAMESPACE value: \"event-test\"",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server>",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SASL_PLAINTEXT --from-literal=sasl.mechanism=<sasl_mechanism> --from-literal=user=<username> --from-literal=password=<password>",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=<my_caroot.pem_file_path> \\ 1 --from-literal=user=<username> --from-literal=password=<password>",
"oc create secret -n <namespace> generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=<my_caroot.pem_file_path> \\ 1 --from-file=user.crt=<my_cert.pem_file_path> --from-file=user.key=<my_key.pem_file_path>",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink_name> namespace: <namespace> spec: auth: secret: ref: name: <secret_name>",
"oc apply -f <filename>",
"kn broker create <broker_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name>",
"oc apply -f <filename>",
"oc -n <namespace> get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection=enabled",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection-",
"oc -n <namespace> delete broker <broker_name>",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"No resources found. Error from server (NotFound): brokers.eventing.knative.dev \"default\" not found",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: KafkaNamespaced 1 name: default namespace: my-namespace 2 spec: config: apiVersion: v1 kind: ConfigMap name: my-config 3",
"oc apply -f <filename>",
"apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: my-namespace data:",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5",
"apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: \"10\" default.topic.replication.factor: \"3\" bootstrap.servers: \"my-cluster-kafka-bootstrap.kafka:9092\"",
"oc apply -f <config_map_filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5",
"oc apply -f <broker_filename>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"kn broker describe <broker_name>",
"kn broker describe default",
"Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: spec: delivery: deadLetterSink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: broker: <broker_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: spec: delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: <channel_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>",
"kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>",
"kn trigger list",
"NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True",
"kn trigger list -o json",
"kn trigger describe <trigger_name>",
"Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2",
"kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>",
"kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> --filter type=dev.knative.samples.helloworld --filter source=dev.knative.samples/helloworldsource --filter myextension=my-extension-value",
"kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]",
"kn trigger update <trigger_name> --filter type=knative.dev.event",
"kn trigger update <trigger_name> --filter type-",
"kn trigger update <trigger_name> --sink ksvc:my-event-sink",
"kn trigger delete <trigger_name>",
"kn trigger list",
"No triggers found.",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default spec: channelTemplate: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel",
"kn channel create <channel_name> --type <channel_type>",
"kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel",
"Channel 'mychannel' created in namespace 'default'.",
"kn channel list",
"kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True",
"kn channel delete <channel_name>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default",
"oc apply -f <filename>",
"apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true",
"apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3",
"kn subscription create mysubscription --channel mychannel --sink ksvc:event-display",
"Subscription 'mysubscription' created in namespace 'default'.",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription delete <subscription_name>",
"kn subscription describe <subscription_name>",
"Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription update <subscription_name> --sink <sink_prefix>:<sink_name> \\ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2",
"kn subscription update mysubscription --sink ksvc:event-display",
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source list",
"NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"kn source list --type <event_source_type>",
"kn source list --type PingSource",
"NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: deployments: - name: eventing-controller readinessProbes: 1 - container: controller timeoutSeconds: 10 resources: - container: eventing-controller requests: cpu: 300m memory: 100Mi limits: cpu: 1000m memory: 250Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: high-availability: replicas: 3",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: high-availability: replicas: 3",
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"podman machine init --memory=8192 --cpus=2 --disk-size=20",
"podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully",
"export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'",
"kn func build -v",
"kn func create -r <repository> -l <runtime> -t <template> <path>",
"kn func create -l typescript -t cloudevents examplefunc",
"Created typescript function in /home/user/demo/examplefunc",
"kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc",
"Created node function in /home/user/demo/examplefunc",
"kn func run",
"kn func run --path=<directory_path>",
"kn func run --build",
"kn func run --build=false",
"kn func help run",
"kn func build",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build",
"kn func deploy [-n <namespace> -p <path> -i <image>]",
"Function deployed at: http://func.example.com",
"kn func invoke",
"kn func delete [<function_name> -n <namespace> -p <path>]",
"oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-s2i/0.1/func-s2i.yaml",
"oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-deploy/0.1/func-deploy.yaml",
"kn func create <function_name> -l <runtime>",
"git: url: <git_repository_url> 1 revision: main 2 contextDir: <directory_path> 3",
"kn func deploy --remote",
"π Creating Pipeline resources Please provide credentials for image registry used by Pipeline. ? Server: https://index.docker.io/v1/ ? Username: my-repo ? Password: ******** Function deployed at URL: http://test-function.default.svc.cluster.local",
"kn func deploy --remote \\ 1 --git-url <repo-url> \\ 2 [--git-branch <branch>] \\ 3 [--git-dir <function-dir>] 4",
"kn func deploy --remote --git-url https://example.com/alice/myfunc.git --git-branch my-feature --git-dir functions/example-func/",
". βββ func.yaml 1 βββ mvnw βββ mvnw.cmd βββ pom.xml 2 βββ README.md βββ src βββ main β βββ java β β βββ functions β β βββ Function.java 3 β β βββ Input.java β β βββ Output.java β βββ resources β βββ application.properties βββ test βββ java βββ functions 4 βββ FunctionTest.java βββ NativeFunctionIT.java",
"<dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies>",
"public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } }",
"public class Purchase { private long customerId; private long productId; // getters and setters }",
"import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } }",
"curl \"http://localhost:8080/withBeans\" -X POST -H \"Content-Type: application/json\" -d '{\"message\": \"Hello there.\"}'",
"curl \"http://localhost:8080/withBeans?message=Hello%20there.\" -X GET",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/json\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBeans\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" -d '{\"message\": \"Hello there.\"}'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d '{ \"data\": {\"message\":\"Hello there.\"}, \"datacontenttype\": \"application/json\", \"id\": \"42\", \"source\": \"curl\", \"type\": \"withBeans\", \"specversion\": \"1.0\"}'",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/octet-stream\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBinary\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" --data-binary '@img.jpg'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d \"{ \\\"data_base64\\\": \\\"USD(base64 --wrap=0 img.jpg)\\\", \\\"datacontenttype\\\": \\\"application/octet-stream\\\", \\\"id\\\": \\\"42\\\", \\\"source\\\": \\\"curl\\\", \\\"type\\\": \\\"withBinary\\\", \\\"specversion\\\": \\\"1.0\\\"}\"",
"public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println(\"subject is: \" + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type(\"purchase.error\") .build(); } return CloudEventBuilder.create() .type(\"purchase.success\") .build(); } }",
"public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } }",
"public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); }",
"./mvnw test",
". βββ func.yaml 1 βββ index.js 2 βββ package.json 3 βββ README.md βββ test 4 βββ integration.js βββ unit.js",
"npm install --save opossum",
"function handle(context, data)",
"// Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context, data)",
"function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) }",
"function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; }",
"function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } }",
"function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm test",
". βββ func.yaml 1 βββ package.json 2 βββ package-lock.json βββ README.md βββ src β βββ index.ts 3 βββ test 4 β βββ integration.ts β βββ unit.ts βββ tsconfig.json",
"npm install --save opossum",
"function handle(context:Context): string",
"// Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); }",
"// Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context: Context, cloudevent?: CloudEvent): CloudEvent",
"export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); };",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm install",
"npm test",
"fn βββ func.py 1 βββ func.yaml 2 βββ requirements.txt 3 βββ test_func.py 4",
"def main(context: Context): \"\"\" The context parameter contains the Flask request object and any CloudEvent received with the request. \"\"\" print(f\"Method: {context.request.method}\") print(f\"Event data {context.cloud_event.data}\") # ... business logic here",
"def main(context: Context): body = { \"message\": \"Howdy!\" } headers = { \"content-type\": \"application/json\" } return body, 200, headers",
"@event(\"event_source\"=\"/my/function\", \"event_type\"=\"my.type\") def main(context): # business logic here data = do_something() # more data processing return data",
"pip install -r requirements.txt",
"python3 test_func.py",
"buildEnvs: - name: EXAMPLE1 value: one",
"buildEnvs: - name: EXAMPLE1 value: '{{ env:LOCAL_ENV_VAR }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap",
"name: test namespace: \"\" runtime: go options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100",
"labels: - key: role value: backend",
"labels: - key: author value: '{{ env:USER }}'",
"{{ env:ENV_VAR }}",
"name: test namespace: \"\" runtime: go envs: - name: MY_API_KEY value: '{{ env:API_KEY }}'",
"kn func config",
"kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret \"mysecret\" mounted at path: \"/workspace/secret\" - Secret \"mysecret2\" mounted at path: \"/workspace/secret2\"",
"kn func config ββ> Environment variables β ββ> Add β β ββ> ConfigMap: Add all key-value pairs from a config map β β ββ> ConfigMap: Add value from a key in a config map β β ββ> Secret: Add all key-value pairs from a secret β β ββ> Secret: Add value from a key in a secret β ββ> List: List all configured environment variables β ββ> Remove: Remove a configured environment variable ββ> Volumes ββ> Add β ββ> ConfigMap: Mount a config map as a volume β ββ> Secret: Mount a secret as a volume ββ> List: List all configured volumes ββ> Remove: Remove a configured volume",
"kn func deploy -p test",
"kn func config envs [-p <function-project-path>]",
"kn func config envs add [-p <function-project-path>]",
"kn func config envs remove [-p <function-project-path>]",
"kn func config volumes [-p <function-project-path>]",
"kn func config volumes add [-p <function-project-path>]",
"kn func config volumes remove [-p <function-project-path>]",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret path: /workspace/secret",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/secret-addresses",
"name: test namespace: \"\" runtime: go volumes: - configMap: myconfigmap path: /workspace/configmap",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/configmap-addresses",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret:userid }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap:userid }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ secret:mysecret }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:myconfigmap }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap }}'",
"name: test namespace: \"\" runtime: go annotations: <annotation_name>: \"<annotation_value>\" 1",
"name: test namespace: \"\" runtime: go annotations: author: \"[email protected]\"",
"function handle(context) { context.log.info(\"Processing customer\"); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); }",
"kn func invoke --target 'http://example.com?name=tiger'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); }",
"kn func invoke -d '{\"Hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"function handle(context) { context.log.info(context.headers[\"custom-header\"]); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"name\": \"tiger\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"} {\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; }",
"curl -H'x-custom-header: some-value'' http://example.function.com",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
"kn service create <service-name> --image <image> --tag <tag-value>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing",
"kn service update <service_name> --env <key>=<value>",
"kn service update <service_name> --port 80",
"kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m",
"kn service update <service_name> --tag <revision_name>=latest",
"kn service update <service_name> --untag testing --tag @latest=staging",
"kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90",
"kn service apply <service_name> --image <image>",
"kn service apply <service_name> --image <image> --env <key>=<value>",
"kn service apply <service_name> -f <filename>",
"kn service describe --verbose <service_name>",
"Name: hello Namespace: default Age: 2m URL: http://hello-default.apps.ocp.example.com Revisions: 100% @latest (hello-00001) [1] (2m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m",
"Name: hello Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://hello-default.apps.ocp.example.com Cluster: http://hello.default.svc.cluster.local Revisions: 100% @latest (hello-00001) [1] (3m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Env: RESPONSE=Hello Serverless! Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m",
"kn service describe <service_name> -o yaml",
"kn service describe <service_name> -o json",
"kn service describe <service_name> -o url",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test",
"Service 'event-display' created in namespace 'test'.",
"tree ./",
"./ βββ test βββ ksvc βββ event-display.yaml 2 directories, 1 file",
"cat test/ksvc/event-display.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}",
"kn service describe event-display --target ./ --namespace test",
"Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON",
"kn service create -f test/ksvc/event-display.yaml",
"Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com",
"kn container add <container_name> --image <image_uri>",
"kn container add sidecar --image docker.io/example/sidecar",
"containers: - image: docker.io/example/sidecar name: sidecar resources: {}",
"kn container add <first_container_name> --image <image_uri> | kn container add <second_container_name> --image <image_uri> | kn service create <service_name> --image <image_uri> --extra-containers -",
"kn container add sidecar --image docker.io/example/sidecar:first | kn container add second --image docker.io/example/sidecar:second | kn service create my-service --image docker.io/example/my-app:latest --extra-containers -",
"kn service create <service_name> --image <image_uri> --extra-containers <filename>",
"kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml",
"kn domain create <domain_mapping_name> --ref <target_name>",
"kn domain create example-domain-map --ref example-service",
"kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>",
"kn domain create example-domain-map --ref ksvc:example-service:example-namespace",
"kn domain create <domain_mapping_name> --ref <kroute:route_name>",
"kn domain create example-domain-map --ref kroute:example-route",
"kn domain list -n <domain_mapping_namespace>",
"kn domain describe <domain_mapping_name>",
"kn domain update --ref <target>",
"kn domain delete <domain_mapping_name>",
"plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7",
"kn event build --field <field-name>=<value> --type <type-name> --id <id> --output <format>",
"kn event build -o yaml",
"data: {} datacontenttype: application/json id: 81a402a2-9c29-4c27-b8ed-246a253c9e58 source: kn-event/v0.4.0 specversion: \"1.0\" time: \"2021-10-15T10:42:57.713226203Z\" type: dev.knative.cli.plugin.event.generic",
"kn event build --field operation.type=local-wire-transfer --field operation.amount=2345.40 --field operation.from=87656231 --field operation.to=2344121 --field automated=true --field signature='FGzCPLvYWdEgsdpb3qXkaVp7Da0=' --type org.example.bank.bar --id USD(head -c 10 < /dev/urandom | base64 -w 0) --output json",
"{ \"specversion\": \"1.0\", \"id\": \"RjtL8UH66X+UJg==\", \"source\": \"kn-event/v0.4.0\", \"type\": \"org.example.bank.bar\", \"datacontenttype\": \"application/json\", \"time\": \"2021-10-15T10:43:23.113187943Z\", \"data\": { \"automated\": true, \"operation\": { \"amount\": \"2345.40\", \"from\": 87656231, \"to\": 2344121, \"type\": \"local-wire-transfer\" }, \"signature\": \"FGzCPLvYWdEgsdpb3qXkaVp7Da0=\" } }",
"kn event send --field <field-name>=<value> --type <type-name> --id <id> --to-url <url> --to <cluster-resource> --namespace <namespace>",
"kn event send --field player.id=6354aa60-ddb1-452e-8c13-24893667de20 --field player.game=2345 --field points=456 --type org.example.gaming.foo --to-url http://ce-api.foo.example.com/",
"kn event send --type org.example.kn.ping --id USD(uuidgen) --field event.type=test --field event.data=98765 --to Service:serving.knative.dev/v1:event-display",
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"kn source container create <container_source_name> --image <image_uri> --sink <sink>",
"kn source container delete <container_source_name>",
"kn source container describe <container_source_name>",
"kn source container list",
"kn source container list -o yaml",
"kn source container update <container_source_name> --image <image_uri>",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource",
"kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn trigger create <trigger_name> --sink ksvc:<service_name>",
"oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source apiserver describe <source_name>",
"Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"kn trigger delete <trigger_name>",
"kn source apiserver delete <source_name>",
"oc delete -f authentication.yaml",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display",
"kn source ping describe test-ping-source",
"Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"kn delete pingsources.sources.knative.dev <ping_source_name>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display",
"kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display",
"kn source kafka describe <kafka_source_name>",
"Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h",
"oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"β\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!",
"kn func create -r <repository> -l <runtime> -t <template> <path>",
"kn func create -l typescript -t cloudevents examplefunc",
"Created typescript function in /home/user/demo/examplefunc",
"kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc",
"Created node function in /home/user/demo/examplefunc",
"kn func run",
"kn func run --path=<directory_path>",
"kn func run --build",
"kn func run --build=false",
"kn func help run",
"kn func build",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build",
"kn func deploy [-n <namespace> -p <path> -i <image>]",
"Function deployed at: http://func.example.com",
"kn func list [-n <namespace> -p <path>]",
"NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True",
"kn service list -n <namespace>",
"NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True",
"kn func info [-f <format> -n <namespace> -p <path>]",
"kn func info -p function/example-function",
"Function name: example-function Function is built in image: docker.io/user/example-function:latest Function is deployed as Knative Service: example-function Function is deployed in namespace: default Routes: http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com",
"kn func invoke",
"kn func invoke --type <event_type> --source <event_source> --data <event_data> --content-type <content_type> --id <event_ID> --format <format> --namespace <namespace>",
"kn func invoke --type ping --source example-ping --data \"Hello world!\" --content-type \"text/plain\" --id example-ID --format http --namespace my-ns",
"kn func invoke --file <path> --content-type <content-type>",
"kn func invoke --file ./test.json --content-type application/json",
"kn func invoke --path <path_to_function>",
"kn func invoke --path ./example/example-function",
"kn func invoke",
"kn func invoke --target <target>",
"kn func invoke --target remote",
"kn func invoke --target \"https://my-event-broker.example.com\"",
"kn func invoke --target local",
"kn func delete [<function_name> -n <namespace> -p <path>]",
"package main import ( \"fmt\" \"log\" \"net/http\" \"os\" \"github.com/prometheus/client_golang/prometheus\" 1 \"github.com/prometheus/client_golang/prometheus/promauto\" \"github.com/prometheus/client_golang/prometheus/promhttp\" ) var ( opsProcessed = promauto.NewCounter(prometheus.CounterOpts{ 2 Name: \"myapp_processed_ops_total\", Help: \"The total number of processed events\", }) ) func handler(w http.ResponseWriter, r *http.Request) { log.Print(\"helloworld: received a request\") target := os.Getenv(\"TARGET\") if target == \"\" { target = \"World\" } fmt.Fprintf(w, \"Hello %s!\\n\", target) opsProcessed.Inc() 3 } func main() { log.Print(\"helloworld: starting server...\") port := os.Getenv(\"PORT\") if port == \"\" { port = \"8080\" } http.HandleFunc(\"/\", handler) // Separate server for metrics requests go func() { 4 mux := http.NewServeMux() server := &http.Server{ Addr: fmt.Sprintf(\":%s\", \"9095\"), Handler: mux, } mux.Handle(\"/metrics\", promhttp.Handler()) log.Printf(\"prometheus: listening on port %s\", 9095) log.Fatal(server.ListenAndServe()) }() // Use same port as normal requests for metrics //http.Handle(\"/metrics\", promhttp.Handler()) 5 log.Printf(\"helloworld: listening on port %s\", port) log.Fatal(http.ListenAndServe(fmt.Sprintf(\":%s\", port), nil)) }",
"apiVersion: serving.knative.dev/v1 1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: spec: containers: - image: docker.io/skonto/helloworld-go:metrics resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\" --- apiVersion: monitoring.coreos.com/v1 2 kind: ServiceMonitor metadata: labels: name: helloworld-go-sm spec: endpoints: - port: queue-proxy-metrics scheme: http - port: app-metrics scheme: http namespaceSelector: {} selector: matchLabels: name: helloworld-go-sm --- apiVersion: v1 3 kind: Service metadata: labels: name: helloworld-go-sm name: helloworld-go-sm spec: ports: - name: queue-proxy-metrics port: 9091 protocol: TCP targetPort: 9091 - name: app-metrics port: 9095 protocol: TCP targetPort: 9095 selector: serving.knative.dev/service: helloworld-go type: ClusterIP",
"hello_route=USD(oc get ksvc helloworld-go -n ns1 -o jsonpath='{.status.url}') && curl USDhello_route",
"Hello Go Sample v1!",
"revision_app_request_count{namespace=\"ns1\", job=\"helloworld-go-sm\"}",
"myapp_processed_ops_total{namespace=\"ns1\", job=\"helloworld-go-sm\"}",
"spec: logStore: elasticsearch: resources: limits: cpu: memory: 16Gi requests: cpu: 500m memory: 16Gi type: \"elasticsearch\" collection: logs: fluentd: resources: limits: cpu: memory: requests: cpu: memory: type: \"fluentd\" visualization: kibana: resources: limits: cpu: memory: requests: cpu: memory: type: kibana",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 32Gi requests: cpu: 3 memory: 32Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: limits: memory: 1Gi requests: cpu: 500m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: resources: limits: memory: 1Gi requests: cpu: 200m memory: 1Gi",
"oc -n openshift-logging get route kibana",
"oc -n openshift-logging get route kibana",
"kubernetes.namespace_name:default AND kubernetes.labels.serving_knative_dev\\/service:{service_name}",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging]",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m",
"oc get svc -n <namespace> | grep headless",
"cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: \"1\" autoscaling.knative.dev/target: \"1\" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\"",
"curl https://helloworld-go.example.com",
"oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4",
"oc get route jaeger -n default",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None",
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt",
"openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr",
"openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt",
"oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace>",
"oc apply -f <filename>",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs>",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>",
"oc apply -f <filename>",
"curl --cacert root.crt <service_url>",
"curl --cacert root.crt https://hello-default.apps.openshift.example.com",
"Hello Openshift!",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" podSelector: {}",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1",
"oc apply -f <filename>",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: \"openshift-serverless\" podSelector: {} policyTypes: - Ingress",
"oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless",
"oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: \"true\" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service spec: labels: app: <revision_name>",
"kn service create hello --image <service-image> --limit nvidia.com/gpu=1",
"kn service update hello --limit nvidia.com/gpu=3",
"oc delete knativeeventings.operator.knative.dev knative-eventing -n knative-eventing",
"oc delete namespace knative-eventing",
"oc delete knativeservings.operator.knative.dev knative-serving -n knative-serving",
"oc delete namespace knative-serving",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get crd -oname | grep 'knative.dev' | xargs oc delete",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:<image_version_tag>",
"oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:1.14.0"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/serverless/index |
HawtIO Diagnostic Console Guide | HawtIO Diagnostic Console Guide Red Hat build of Apache Camel 4.8 Manage applications with Red Hat build of HawtIO | [
"<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>",
"jbang app install -Dhawtio.jbang.version=4.1.0.redhat-00015 hawtio@hawtio/hawtio",
"hawtio",
"hawtio --port 8090",
"hawtio --help Usage: hawtio [-hjoV] [-c=<contextPath>] [-d=<plugins>] [-e=<extraClassPath>] [-H=<host>] [-k=<keyStore>] [-l=<warLocation>] [-p=<port>] [-s=<keyStorePass>] [-w=<war>] Run HawtIO -c, --context-path=<contextPath> Context path. -d, --plugins-dir=<plugins> Directory to search for .war files to install as 3rd party plugins. -e, --extra-class-path=<extraClassPath> Extra class path. -h, --help Print usage help and exit. -H, --host=<host> Hostname to listen to. -j, --join Join server thread. -k, --key-store=<keyStore> JKS keyStore with the keys for https. -l, --war-location=<warLocation> Directory to search for .war files. -o, --open-url Open the web console automatic in the web browser. -p, --port=<port> Port number. -s, --key-store-pass=<keyStorePass> Password for the JKS keyStore with the keys for https. -V, --version Print HawtIO version -w, --war=<war> War file or directory of the hawtio web application.",
"<dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.1.0.redhat-00015</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> <!-- ... other BOMs or dependencies ... --> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-quarkus</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / HawtIO --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency> <!-- (Optional) Required for HawtIO Camel route diagram tab --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>",
"quarkus.hawtio.authenticationEnabled = false",
"mvn compile quarkus:dev",
"<dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.1.0.redhat-00015</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... --> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / HawtIO --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-management-starter</artifactId> </dependency> <!-- (Optional) Required for HawtIO Camel route diagram tab --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-xml-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>",
"spring.jmx.enabled = true management.endpoints.web.exposure.include = hawtio,jolokia",
"mvn spring-boot:run",
"management.endpoints.web.base-path = /",
"management.endpoints.web.path-mapping.hawtio = hawtio/console",
"quarkus.hawtio.disableProxy = true",
"hawtio.disableProxy = true",
"jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml",
"{ \"branding\": { \"appName\": \"HawtIO Management Console\", \"showAppName\": false, \"appLogoUrl\": \"hawtio-logo.svg\", \"companyLogoUrl\": \"hawtio-logo.svg\", \"css\": \"\", \"favicon\": \"favicon.ico\" }, \"login\": { \"description\": \"Login page for HawtIO Management Console.\", \"links\": [ { \"url\": \"#terms\", \"text\": \"Terms of Use\" }, { \"url\": \"#help\", \"text\": \"Help\" }, { \"url\": \"#privacy\", \"text\": \"Privacy Policy\" } ] }, \"about\": { \"title\": \"HawtIO Management Console\", \"description\": \"A HawtIO reimplementation based on TypeScript + React.\", \"imgSrc\": \"hawtio-logo.svg\", \"productInfo\": [ { \"name\": \"ABC\", \"value\": \"1.2.3\" }, { \"name\": \"XYZ\", \"value\": \"7.8.9\" } ], \"copyright\": \"(c) HawtIO project\" }, \"disabledRoutes\": [ \"/disabled\" ] }",
"\"branding\": { \"appName\": \"HawtIO Management Console\", \"showAppName\": false, \"appLogoUrl\": \"hawtio-logo.svg\", \"companyLogoUrl\": \"hawtio-logo.svg\", \"css\": \"\", \"favicon\": \"favicon.ico\" }",
"\"login\": { \"description\": \"Login page for HawtIO Management Console.\", \"links\": [ { \"url\": \"#terms\", \"text\": \"Terms of Use\" }, { \"url\": \"#help\", \"text\": \"Help\" }, { \"url\": \"#privacy\", \"text\": \"Privacy Policy\" } ] }",
"\"about\": { \"title\": \"HawtIO Management Console\", \"description\": \"A HawtIO reimplementation based on TypeScript + React.\", \"imgSrc\": \"hawtio-logo.svg\", \"productInfo\": [ { \"name\": \"ABC\", \"value\": \"1.2.3\" }, { \"name\": \"XYZ\", \"value\": \"7.8.9\" } ], \"copyright\": \"(c) HawtIO project\" }",
"\"disabledRoutes\": [ \"/disabled\" ]",
"<domain>/<prop1>=<value1>,<prop2>=<value2>,",
"\"jmx\": { \"workspace\": [ \"hawtio\", \"java.lang/type=Memory\", \"org.apache.camel\", \"no.such.domain\" ] }",
"\"online\": { \"projectSelector\": \"myproject\", \"consoleLink\": { \"text\": \"HawtIO Management Console\", \"section\": \"HawtIO\", \"imageRelativePath\": \"/online/img/favicon.ico\" } }",
"import { HawtIOPlugin, configManager } from '@hawtio/react' /** * The entry function of your plugin. */ export const plugin: HawtIOPlugin = () => { } // Register the custom plugin version to HawtIO // See package.json \"replace-version\" script for how to replace the version placeholder with a real version configManager.addProductInfo('HawtIO Sample Plugin', '__PACKAGE_VERSION_PLACEHOLDER__') /* * This example also demonstrates how branding and styles can be customised from a WAR plugin. * * The Plugin API `configManager` provides `configure(configurer: (config: Hawtconfig) => void)` method * and you can customise the `Hawtconfig` by invoking it from the plugin's `index.ts`. */ configManager.configure(config => { // Branding & styles config.branding = { appName: 'HawtIO Sample WAR Plugin', showAppName: true, appLogoUrl: '/sample-plugin/branding/Logo-RedHat-A-Reverse-RGB.png', css: '/sample-plugin/branding/app.css', favicon: '/sample-plugin/branding/favicon.ico', } // Login page config.login = { description: 'Login page for HawtIO Sample WAR Plugin application.', links: [ { url: '#terms', text: 'Terms of use' }, { url: '#help', text: 'Help' }, { url: '#privacy', text: 'Privacy policy' }, ], } // About modal if (!config.about) { config.about = {} } config.about.title = 'HawtIO Sample WAR Plugin' config.about.description = 'About page for HawtIO Sample WAR Plugin application.' config.about.imgSrc = '/sample-plugin/branding/Logo-RedHat-A-Reverse-RGB.png' if (!config.about.productInfo) { config.about.productInfo = [] } config.about.productInfo.push( { name: 'HawtIO Sample Plugin - simple-plugin', value: '1.0.0' }, { name: 'HawtIO Sample Plugin - custom-tree', value: '1.0.0' }, ) // If you want to disable specific plugins, you can specify the paths to disable them. //config.disabledRoutes = ['/simple-plugin'] })",
"quarkus.hawtio.authenticationEnabled = false",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency>",
"quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin",
"hawtio.authenticationEnabled = false",
"<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency>",
"spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer",
"@EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeHttpRequests(authorize -> authorize .anyRequest().authenticated() ) .formLogin(withDefaults()) .httpBasic(withDefaults()) .csrf(csrf -> csrf .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()) .csrfTokenRequestHandler(new SpaCsrfTokenRequestHandler()) ) .addFilterAfter(new CsrfCookieFilter(), BasicAuthenticationFilter.class); return http.build(); } }",
"import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } }",
"<restrict> <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict>",
"docker run -d --name keycloak -p 18080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak start-dev",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>",
"quarkus.oidc.auth-server-url = http://localhost:18080/realms/hawtio-demo quarkus.oidc.client-id = hawtio-client quarkus.oidc.credentials.secret = secret quarkus.oidc.application-type = web-app quarkus.oidc.token-state-manager.split-tokens = true quarkus.http.auth.permission.authenticated.paths = \"/*\" quarkus.http.auth.permission.authenticated.policy = authenticated",
"{ \"realm\": \"hawtio-demo\", \"clientId\": \"hawtio-client\", \"url\": \"http://localhost:18080/\", \"jaas\": false, \"pkceMethod\": \"S256\" }",
"<dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot-keycloak</artifactId> <version>4.x.y</version> </dependency>",
"keycloak.realm = hawtio-demo keycloak.resource = hawtio-client keycloak.auth-server-url = http://localhost:18080/ keycloak.ssl-required = external keycloak.public-client = true keycloak.principal-attribute = preferred_username",
"{ \"realm\": \"hawtio-demo\", \"clientId\": \"hawtio-client\", \"url\": \"http://localhost:18080/\", \"jaas\": false }",
"auth can-i update pods/<pod> --as <user>",
"auth can-i get pods/<pod> --as <user>",
"project hawtio-test",
"process -f https://raw.githubusercontent.com/hawtio/hawtio-online/2.x/docker/ACL.yaml -p APP_NAME=custom-hawtio | oc create -f -",
"edit ConfigMap custom-hawtio-rbac",
"apiVersion: hawt.io/v1 kind: HawtIO metadata: name: hawtio-console spec: type: Namespace nginx: clientBodyBufferSize: 256k proxyBuffers: 16 128k subrequestOutputBufferSize: 100m",
"camel.routecontroller.enabled = true",
"mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-spring-boot -DarchetypeVersion=4.8.0.redhat-00022 -DgroupId=io.hawt.online.examples -DartifactId=hawtio-online-example -Dversion=1.0.0 -DinteractiveMode=false -Dpackage=io.hawtio",
"mvn spring-boot:run",
"<dependencies> <!-- Camel --> <!-- Dependency is mandatory for exposing Jolokia endpoint --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jolokia-starter</artifactId> </dependency> <!-- Optional: enables debugging support for Camel --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-debug</artifactId> <version>4.8.0</version> </dependency> </dependencies>",
"camel.springboot.inflight-repository-browse-enabled=true",
"spec: containers: - name: my-container ports: - name: jolokia containerPort: 8778 protocol: TCP ..... ....",
"mvn clean install -DskipTests -P openshift",
"mvn io.quarkus.platform:quarkus-maven-plugin:3.14.2:create -DprojectGroupId=org.hawtio -DprojectArtifactId=quarkus-helloworld -Dextensions='openshift,camel-quarkus-quartz'",
"Set the Docker build strategy quarkus.openshift.build-strategy=docker # Expose the service to create an OpenShift Container Platform route quarkus.openshift.route.expose=true",
"package org.hawtio; import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.builder.endpoint.EndpointRouteBuilder; @ApplicationScoped public class SampleCamelRoute extends EndpointRouteBuilder { @Override public void configure() { from(quartz(\"cron\").cron(\"{{quartz.cron}}\")).routeId(\"cron\") .setBody().constant(\"Hello Camel! - cron\") .to(stream(\"out\")) .to(mock(\"result\")); from(\"quartz:simple?trigger.repeatInterval={{quartz.repeatInterval}}\").routeId(\"simple\") .setBody().constant(\"Hello Camel! - simple\") .to(stream(\"out\")) .to(mock(\"result\")); } }",
"Camel camel.context.name = SampleCamel # Uncomment the following to enable the Camel plugin Trace tab #camel.main.tracing = true #camel.main.backlogTracing = true #camel.main.useBreadcrumb = true # Uncomment to enable debugging of the application and in turn # enables the Camel plugin Debug tab even in non-development # environment #quarkus.camel.debug.enabled = true # Define properties for the Camel quartz component used in the # example quartz.cron = 0/10 * * * * ? quartz.repeatInterval = 10000",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-stream</artifactId> </dependency> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mock</artifactId> </dependency>",
"<resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources>",
"<properties> <!-- The current HawtIO Jolokia Version --> <jolokia-version>2.1.0</jolokia-version> <!-- =============================================================== === Jolokia agent configuration for the connection with HawtIO =============================================================== It should use HTTPS and SSL client authentication at minimum. The client principal should match those the HawtIO instance provides (the default is `hawtio-online.hawtio.svc`). --> <jolokia.protocol>https</jolokia.protocol> <jolokia.host>*</jolokia.host> <jolokia.port>8778</jolokia.port> <jolokia.useSslClientAuthentication>true</jolokia.useSslClientAuthentication> <jolokia.caCert>/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt</jolokia.caCert> <jolokia.clientPrincipal.1>cn=hawtio-online.hawtio.svc</jolokia.clientPrincipal.1> <jolokia.extendedClientCheck>true</jolokia.extendedClientCheck> <jolokia.discoveryEnabled>false</jolokia.discoveryEnabled> </properties>",
"<!-- This dependency is required for enabling Camel management via JMX / HawtIO. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency> <!-- This dependency is optional for monitoring with HawtIO but is required for HawtIO view the Camel routes source XML. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- Add this optional dependency, to enable Camel plugin debugging feature. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-debug</artifactId> </dependency> <!-- This dependency is required to include the Jolokia agent jvm for access to JMX beans. --> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-agent-jvm</artifactId> <version>USD{jolokia-version}</version> <classifier>javaagent</classifier> </dependency>",
"Enable the jolokia java-agent on the quarkus application quarkus.openshift.env.vars.JAVA_OPTS_APPEND=-javaagent:lib/main/org.jolokia.jolokia-agent-jvm-USD{jolokia-version}-javaagent.jar=protocol=USD{jolokia.protocol}\\,host=USD{jolokia.host}\\,port=USD{jolokia.port}\\,useSslClientAuthentication=USD{jolokia.useSslClientAuthentication}\\,caCert=USD{jolokia.caCert}\\,clientPrincipal.1=USD{jolokia.clientPrincipal.1}\\,extendedClientCheck=USD{jolokia.extendedClientCheck}\\,discoveryEnabled=USD{jolokia.discoveryEnabled}",
"Define the Jolokia port on the container for HawtIO access quarkus.openshift.ports.jolokia.container-port=USD{jolokia.port} quarkus.openshift.ports.jolokia.protocol=TCP",
"./mvnw clean package -Dquarkus.kubernetes.deploy=true",
"OpenID Connect configuration requred at client side URL of OpenID Connect Provider - the URL after which \".well-known/openid-configuration\" can be appended for discovery purposes provider = http://localhost:18080/realms/hawtio-demo OpenID client identifier client_id = hawtio-client response mode according to https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html response_mode = fragment scope to request when performing OpenID authentication. MUST include \"openid\" and required permissions scope = openid email profile redirect URI after OpenID authentication - must also be configured at provider side redirect_uri = http://localhost:8080/hawtio challenge method according to https://datatracker.ietf.org/doc/html/rfc7636 code_challenge_method = S256 prompt hint according to https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest prompt = login additional configuration for the server side if true, .well-known/openid-configuration will be fetched at server side. This is required for proper JWT access token validation oidc.cacheConfig = true time in minutes to cache public keys from jwks_uri jwks.cacheTime = 60 a path for an array of roles found in JWT payload. Property placeholders can be used for parameterized parts of the path (like for Keycloak) - but only for properties from this particular file example for properly configured Entra ID token #oidc.rolesPath = roles example for Keycloak with use-resource-role-mappings=true #oidc.rolesPath = resource_access.USD{client_id}.roles example for Keycloak with use-resource-role-mappings=false oidc.rolesPath = realm_access.roles properties for role mapping. Each property with \"roleMapping.\" prefix is used to map an original role from JWT token (found at USD{oidc.rolesPath}) to a role used by the application roleMapping.admin = admin roleMapping.user = user roleMapping.viewer = viewer roleMapping.manager = manager timeout for connection establishment (milliseconds) http.connectionTimeout = 5000 timeout for reading from established connection (milliseconds) http.readTimeout = 10000 HTTP proxy to use when connecting to OpenID Connect provider #http.proxyURL = http://127.0.0.1:3128 TLS configuration (system properties can be used, e.g., \"USD{catalina.home}/conf/hawtio.jks\") #ssl.protocol = TLSv1.3 #ssl.truststore = src/test/resources/hawtio.jks #ssl.truststorePassword = hawtio #ssl.keystore = src/test/resources/hawtio.jks #ssl.keystorePassword = hawtio #ssl.keyAlias = openid connect test provider #ssl.keyPassword = hawtio",
"-Dhawtio.rolePrincipalClasses=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal",
"run -d --name keycloak -p 18080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:latest start-dev",
"{ \"exp\": 1709552728, \"iat\": 1709552428, \"jti\": \"0f33971f-c4f7-4a5c-a240-c18ba3f97aa1\", \"iss\": \"http://localhost:18080/realms/hawtio-demo\", \"aud\": \"account\", \"sub\": \"84d156fa-e4cc-4785-91c1-4e0bda4b8ed9\", \"typ\": \"Bearer\", \"azp\": \"hawtio-client\", \"session_state\": \"181a30ac-fce1-4f4f-aaee-110304ccb0e6\", \"acr\": \"1\", \"allowed-origins\": [ \"http://0.0.0.0:8181\", \"http://localhost:8080\", \"http://localhost:8181\", \"http://0.0.0.0:10001\", \"http://0.0.0.0:8080\", \"http://localhost:10001\", \"http://localhost:10000\", \"http://0.0.0.0:10000\" ], \"realm_access\": { \"roles\": [ \"viewer\", \"manager\", \"admin\", \"user\" ] }, \"resource_access\": { \"account\": { \"roles\": [ \"manage-account\", \"manage-account-links\", \"view-profile\" ] } }, \"scope\": \"openid profile email\", \"sid\": \"181a30ac-fce1-4f4f-aaee-110304ccb0e6\", \"email_verified\": false, \"name\": \"Admin HawtIO\", \"preferred_username\": \"admin\", \"given_name\": \"Admin\", \"family_name\": \"HawtIO\", \"email\": \"[email protected]\" }",
"example for Keycloak with use-resource-role-mappings=false oidc.rolesPath = realm_access.roles",
"OpenID Connect configuration requred at client side URL of OpenID Connect Provider - the URL after which \".well-known/openid-configuration\" can be appended for discovery purposes provider = https://login.microsoftonline.com/00000000-1111-2222-3333-444444444444/v2.0 OpenID client identifier client_id = 55555555-6666-7777-8888-999999999999 response mode according to https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html response_mode = fragment scope to request when performing OpenID authentication. MUST include \"openid\" and required permissions scope = openid email profile redirect URI after OpenID authentication - must also be configured at provider side redirect_uri = http://localhost:8080/hawtio challenge method according to https://datatracker.ietf.org/doc/html/rfc7636 code_challenge_method = S256 prompt hint according to https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest prompt = login",
"{ \"aud\": \"00000003-0000-0000-c000-000000000000\", \"iss\": \"https://sts.windows.net/8fd8ed3d-c739-410f-83ab-ac2228fa6bbf/\", \"app_displayname\": \"hawtio\", \"scp\": \"email openid profile User.Read\", }",
"scope to request when performing OpenID authentication. MUST include \"openid\" and required permissions scope = openid email profile api://hawtio-server/Jolokia.Access",
"\"optionalClaims\": { \"idToken\": [ { \"name\": \"groups\", \"source\": null, \"essential\": false, \"additionalProperties\": [] } ], \"accessToken\": [ { \"name\": \"groups\", \"source\": null, \"essential\": false, \"additionalProperties\": [ \"sam_account_name\" ] },",
"{ \"aud\": \"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee\", \"iss\": \"https://sts.windows.net/.../\", \"iat\": 1709626257, \"nbf\": 1709626257, \"exp\": 1709630939, \"appid\": \"55555555-6666-7777-8888-999999999999\", \"groups\": [ ], \"name\": \"hawtio-viewer\", \"roles\": [ \"HawtIO.User\" ], \"scp\": \"Jolokia.Access\",",
"a path for an array of roles found in JWT payload. Property placeholders can be used for parameterized parts of the path (like for Keycloak) - but only for properties from this particular file example for properly configured Entra ID token #oidc.rolesPath = roles properties for role mapping. Each property with \"roleMapping.\" prefix is used to map an original role from JWT token (found at USD{oidc.rolesPath}) to a role used by the application roleMapping.HawtIO.User = user"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/hawtio_diagnostic_console_guide/%7Bhawtio-examples-url%7Dquarkus |
Chapter 12. KafkaListenerAuthenticationCustom schema reference | Chapter 12. KafkaListenerAuthenticationCustom schema reference Used in: GenericKafkaListener Full list of KafkaListenerAuthenticationCustom schema properties Configures custom authentication for listeners. To configure custom authentication, set the type property to custom . Custom authentication allows for any type of Kafka-supported authentication to be used. Example custom OAuth authentication configuration spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example A protocol map is generated that uses the sasl and tls values to determine which protocol to map to the listener. SASL = True, TLS = True SASL_SSL SASL = False, TLS = True SSL SASL = True, TLS = False SASL_PLAINTEXT SASL = False, TLS = False PLAINTEXT Secrets are mounted to /opt/kafka/custom-authn-secrets/custom-listener-<listener_name>-<port>/<secret_name> in the Kafka broker nodes' containers. For example, the mounted secret ( example ) in the example configuration would be located at /opt/kafka/custom-authn-secrets/custom-listener-oauth-bespoke-9093/example . 12.1. Setting a custom principal builder You can set a custom principal builder in the Kafka cluster configuration. However, the principal builder is subject to the following requirements: The specified principal builder class must exist on the image. Before building your own, check if one already exists. You'll need to rebuild the Streams for Apache Kafka images with the required classes. No other listener is using oauth type authentication. This is because an OAuth listener appends its own principle builder to the Kafka configuration. The specified principal builder is compatible with Streams for Apache Kafka. Custom principal builders must support peer certificates for authentication, as Streams for Apache Kafka uses these to manage the Kafka cluster. Note Kafka's default principal builder class supports the building of principals based on the names of peer certificates. The custom principal builder should provide a principal of type user using the name of the SSL peer certificate. The following example shows a custom principal builder that satisfies the OAuth requirements of Streams for Apache Kafka. Example principal builder for custom OAuth configuration public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException("Cannot use an unverified peer for authentication", e); } } // Create your own KafkaPrincipal here ... } } 12.2. 
KafkaListenerAuthenticationCustom schema properties The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationCustom type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth . It must have the value custom for the type KafkaListenerAuthenticationCustom . Properties (name, property type, description): type (string) - Must be custom . sasl (boolean) - Enable or disable SASL on this listener. listenerConfig (map) - Configuration to be used for a specific listener. All values are prefixed with listener.name.<listener_name> . secrets (GenericSecretSource array) - Secrets to be mounted to /opt/kafka/custom-authn-secrets/custom-listener-<listener_name>-<port>/<secret_name> . | [
"spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example",
"public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException(\"Cannot use an unverified peer for authentication\", e); } } // Create your own KafkaPrincipal here } }"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkalistenerauthenticationcustom-reference |
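The chapter above notes that secrets listed under a custom listener's authentication are mounted into the Kafka broker containers at /opt/kafka/custom-authn-secrets/custom-listener-<listener_name>-<port>/<secret_name>. A minimal sketch of confirming that mount for the example configuration is shown below; the pod name my-cluster-kafka-0 and the namespace are assumptions that depend on your Kafka cluster name and deployment.

# List the files mounted from the 'example' secret on one broker pod (pod name and namespace are illustrative)
oc exec my-cluster-kafka-0 -n kafka -- ls /opt/kafka/custom-authn-secrets/custom-listener-oauth-bespoke-9093/example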
Appendix D. S3 unsupported header fields | Appendix D. S3 unsupported header fields Table D.1. Unsupported Header Fields (Name - Type): x-amz-security-token - Request; Server - Response; x-amz-delete-marker - Response; x-amz-id-2 - Response; x-amz-request-id - Response; x-amz-version-id - Response | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/s3-unsupported-header-fields_dev
Monitoring APIs | Monitoring APIs OpenShift Container Platform 4.12 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/index |
Chapter 1. Key features | Chapter 1. Key features An open standard protocol - AMQP 1.0 Industry-standard APIs - JMS 1.1 and 2.0 New event-driven APIs - Fast, efficient messaging that integrates everywhere Broad language support - C++, Java, JavaScript, Python, Ruby, and .NET Wide availability - Linux, Windows, and JVM-based environments | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_overview/key_features |
Chapter 6. Federal Information Processing Standard on Red Hat OpenStack Services on OpenShift | Chapter 6. Federal Information Processing Standard on Red Hat OpenStack Services on OpenShift The Federal Information Processing Standards (FIPS) is a set of security requirements developed by the National Institute of Standards and Technology (NIST). In Red Hat Enterprise Linux 9, the supported standard is FIPS publication 140-3: Security Requirements for Cryptographic Modules . For details about the supported standard, see the Federal Information Processing Standards Publication 140-3 . FIPS 140-3 validated cryptographic modules are cryptographic libraries that have completed the NIST CMVP process and have received a certificate from NIST. For current information on Red Hat FIPS 140 validated modules, see Compliance Activities and Government Standards . FIPS is enabled by default in Red Hat OpenStack Services on OpenShift (RHOSO) when RHOSO is installed on a FIPS enabled Red Hat OpenShift Container Platform (RHOCP) cluster. You must enable FIPS on the initial install of RHOCP. For more information on installing a RHOCP cluster in FIPS mode, see Installing a cluster in FIPS mode . When you use the system-wide cryptographic policy, FIPS 140 mode , RHEL and CoreOS are designed to restrict the use of core cryptographic modules and libraries to those that have been FIPS-validated. Paramiko however, implements cryptographic functions in code, and has not been FIPS-validated. RHOSO core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation unless they call paramiko. 6.1. Preparing to install a FIPS enabled Red Hat OpenStack Services on OpenShift control plane Before you install the Red Hat OpenStack Services on OpenShift (RHOSO) control plane, you must modify iscsi.conf to remove MD5 and SHA1. The iSCSId configuration for the control plane is not handled by RHOSO operators, so you must complete this step on the Red Hat OpenShift Container Platform (RHOCP) cluster. Prerequisites You have a pre-existing RHOCP cluster with FIPS enabled. For more information about FIPS on RHOCP, see Support for FIPS cryptography . Procedure On each of your nodes, ensure that the value of node.session.auth.chap_algs in the /etc/iscsi/iscsi.conf file is set to SHA3-256,SHA256 . 6.2. Verification of FIPS status You can check the FIPS status of RHOCP or deployed worker nodes. Procedure Log in to your Red Hat OpenShift Container Platform (RHOCP) cluster with an account with cluster-admin privileges. Get a list of the nodes in the cluster: Example output: Open a debug pod on one of the nodes shown in the output of the step: Example output: Check for fips_enabled in /proc Example output. 1 is displayed for enabled, 0 for disabled: For more information about installing Red Hat OpenShift Cluster Platform in FIPS mode, see Support for FIPS cryptography in the RHOCP Installing guide. | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master1 Ready control-plane,master 7d1h v1.28.6+6216ea1 master2 Ready control-plane,master 7d1h v1.28.6+6216ea1 master3 Ready control-plane,master 7d1h v1.28.6+6216ea1 worker1 Ready worker 7d1h v1.28.6+6216ea1 worker2 Ready worker 7d1h v1.28.6+6216ea1 worker3 Ready worker",
"oc debug node/worker2",
"Temporary namespace openshift-debug-rq2m8 is created for debugging node Starting pod/worker2-debug-5shqt To use host binaries, run `chroot /host` Pod IP: 192.168.50.112 If you don't see a command prompt, try pressing enter. sh-5.1#",
"sh-5.1# cat /proc/sys/crypto/fips_enabled",
"1"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/assembly-fips_planning |
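The preparation step in the chapter above requires that node.session.auth.chap_algs in the /etc/iscsi/iscsi.conf file is set to SHA3-256,SHA256 so that MD5 and SHA1 are removed. A minimal sketch of checking this from a node debug shell is shown below; it reuses the oc debug flow from the verification procedure, and the node name worker2 is taken from that example for illustration.

# Open a debug shell on a node and switch to the host file system
oc debug node/worker2
chroot /host
# Confirm that only SHA3-256 and SHA256 are listed for the iSCSI CHAP algorithms
grep chap_algs /etc/iscsi/iscsi.conf
# Expected setting:
# node.session.auth.chap_algs = SHA3-256,SHA256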
Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster | Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. Important You can configure the cluster autoscaler only in clusters where the Machine API Operator is operational. 7.1. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Automatic node removal Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The node utilization is less than the node utilization level threshold for the cluster. The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5 , which corresponds to 50% utilization. The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes. The cluster autoscaler does not have scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. 
Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation. For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. Limitations If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. Note The cluster autoscaler only adds nodes in autoscaled node groups if doing so would result in a schedulable pod. If the available node types cannot meet the requirements for a pod request, or if the node groups that could meet these requirements are at their maximum size, the cluster autoscaler cannot scale up. Interaction with other scheduling features The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. 7.1.1. Configuring the cluster autoscaler First, deploy the cluster autoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster. Note Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster. 7.1.1.1. Cluster autoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. Note When you change the configuration of an existing cluster autoscaler, it restarts. 
apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: "0.4" 17 expanders: ["Random"] 18 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optional: To configure the cluster autoscaler to deploy GPU-enabled nodes, specify a type value. This value must match the value of the spec.template.spec.metadata.labels[cluster-api/accelerator] label in the machine set that manages the GPU-enabled nodes of that type. For example, this value might be nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. For more information, see "Labeling GPU machine sets for the cluster autoscaler". 8 Specify the minimum number of GPUs of the specified type to deploy in the cluster. 9 Specify the maximum number of GPUs of the specified type to deploy in the cluster. 10 Specify the logging verbosity level between 0 and 10 . The following log level thresholds are provided for guidance: 1 : (Default) Basic information about changes. 4 : Debug-level verbosity for troubleshooting typical issues. 9 : Extensive, protocol-level debugging information. If you do not specify a value, the default value of 1 is used. 11 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 12 Specify whether the cluster autoscaler can remove unnecessary nodes. 13 Optional: Specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 14 Optional: Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 0s is used. 15 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 16 Optional: Specify a period of time before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. 17 Optional: Specify the node utilization level . Nodes below this utilization level are eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1" . 
If you do not specify a value, the cluster autoscaler uses a default value of "0.5" , which corresponds to 50% utilization. You must express this value as a string. 18 Optional: Specify any expanders that you want the cluster autoscaler to use. The following values are valid: LeastWaste : Selects the machine set that minimizes the idle CPU after scaling. If multiple machine sets would yield the same amount of idle CPU, the selection minimizes unused memory. Priority : Selects the machine set with the highest user-assigned priority. To use this expander, you must create a config map that defines the priority of your machine sets. For more information, see "Configuring a priority expander for the cluster autoscaler." Random : (Default) Selects the machine set randomly. If you do not specify a value, the default value of Random is used. You can specify multiple expanders by using the [LeastWaste, Priority] format. The cluster autoscaler applies each expander according to the specified order. In the [LeastWaste, Priority] example, the cluster autoscaler first evaluates according to the LeastWaste criteria. If more than one machine set satisfies the LeastWaste criteria equally well, the cluster autoscaler then evaluates according to the Priority criteria. If more than one machine set satisfies all of the specified expanders equally well, the cluster autoscaler selects one to use at random. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 7.1.1.2. Configuring a priority expander for the cluster autoscaler When the cluster autoscaler uses the priority expander, it scales up by using the machine set with the highest user-assigned priority. To use this expander, you must create a config map that defines the priority of your machine sets. For each specified priority level, you must create regular expressions to identify machine sets that you want to use when prioritizing a machine set for selection. The regular expressions must match the name of any compute machine set that you want the cluster autoscaler to consider for selection. Prerequisites You have deployed an OpenShift Container Platform cluster that uses the Machine API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). 
Procedure List the compute machine sets on your cluster by running the following command: USD oc get machinesets.machine.openshift.io Example output NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m Using regular expressions, construct one or more patterns that match the name of any compute machine set that you want to set a priority level for. For example, use the regular expression pattern *fast* to match any compute machine set that includes the string fast in its name. Create a cluster-autoscaler-priority-expander.yml YAML file that defines a config map similar to the following: Example priority expander config map apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.* 1 You must name config map cluster-autoscaler-priority-expander . 2 You must create the config map in the same namespace as cluster autoscaler pod, which is the openshift-machine-api namespace. 3 Define the priority of your machine sets. The priorities values must be positive integers. The cluster autoscaler uses higher-value priorities before lower-value priorities. For each priority level, specify the regular expressions that correspond to the machine sets you want to use. Create the config map by running the following command: USD oc create configmap cluster-autoscaler-priority-expander \ --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml Verification Review the config map by running the following command: USD oc get configmaps cluster-autoscaler-priority-expander -o yaml steps To use the priority expander, ensure that the ClusterAutoscaler resource definition is configured to use the expanders: ["Priority"] parameter. 7.1.1.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". 7.1.2. Deploying a cluster autoscaler To deploy a cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for a ClusterAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. 
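Verification After you create the ClusterAutoscaler resource, you can confirm that it was accepted and that the autoscaler workload is running. The following commands are a sketch only; the deployment name shown ( cluster-autoscaler-default ) is the name typically created for a ClusterAutoscaler named default , but it might differ in your environment: USD oc get clusterautoscaler default USD oc get deployments -n openshift-machine-api | grep cluster-autoscaler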
steps After you configure the cluster autoscaler, you must configure at least one machine autoscaler . 7.2. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the compute machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker compute machine set and any other compute machine sets that you create. The machine autoscaler creates more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the compute machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on compute machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 7.2.1. Configuring machine autoscalers After you deploy the cluster autoscaler, deploy MachineAutoscaler resources that reference the compute machine sets that are used to scale the cluster. Important You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource. Note You must configure separate resources for each compute machine set. Remember that compute machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The compute machine set that you scale must have at least one machine in it. 7.2.1.1. Machine autoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which compute machine set this machine autoscaler scales, specify or include the name of the compute machine set to scale. The compute machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a compute machine set with extra large machines. The cluster autoscaler scales the compute machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines.
4 In this section, provide values that describe the existing compute machine set to scale. 5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing compute machine set, as shown in the metadata.name parameter value. 7.2.2. Deploying a machine autoscaler To deploy a machine autoscaler, you create an instance of the MachineAutoscaler resource. Procedure Create a YAML file for a MachineAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. 7.3. Disabling autoscaling You can disable an individual machine autoscaler in your cluster or disable autoscaling on the cluster entirely. 7.3.1. Disabling a machine autoscaler To disable a machine autoscaler, you delete the corresponding MachineAutoscaler custom resource (CR). Note Disabling a machine autoscaler does not disable the cluster autoscaler. To disable the cluster autoscaler, follow the instructions in "Disabling the cluster autoscaler". Procedure List the MachineAutoscaler CRs for the cluster by running the following command: USD oc get MachineAutoscaler -n openshift-machine-api Example output NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m Optional: Create a YAML file backup of the MachineAutoscaler CR by running the following command: USD oc get MachineAutoscaler/<machine_autoscaler_name> \ 1 -n openshift-machine-api \ -o yaml> <machine_autoscaler_name_backup>.yaml 2 1 <machine_autoscaler_name> is the name of the CR that you want to delete. 2 <machine_autoscaler_name_backup> is the name for the backup of the CR. Delete the MachineAutoscaler CR by running the following command: USD oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api Example output machineautoscaler.autoscaling.openshift.io "compute-us-east-1a" deleted Verification To verify that the machine autoscaler is disabled, run the following command: USD oc get MachineAutoscaler -n openshift-machine-api The disabled machine autoscaler does not appear in the list of machine autoscalers. steps If you need to re-enable the machine autoscaler, use the <machine_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a machine autoscaler". Additional resources Disabling the cluster autoscaler Deploying a machine autoscaler 7.3.2. Disabling the cluster autoscaler To disable the cluster autoscaler, you delete the corresponding ClusterAutoscaler resource. Note Disabling the cluster autoscaler disables autoscaling on the cluster, even if the cluster has existing machine autoscalers. Procedure List the ClusterAutoscaler resource for the cluster by running the following command: USD oc get ClusterAutoscaler Example output NAME AGE default 42m Optional: Create a YAML file backup of the ClusterAutoscaler CR by running the following command: USD oc get ClusterAutoscaler/default \ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2 1 default is the name of the ClusterAutoscaler CR. 2 <cluster_autoscaler_backup_name> is the name for the backup of the CR. 
Delete the ClusterAutoscaler CR by running the following command: USD oc delete ClusterAutoscaler/default Example output clusterautoscaler.autoscaling.openshift.io "default" deleted Verification To verify that the cluster autoscaler is disabled, run the following command: USD oc get ClusterAutoscaler Expected output No resources found steps Disabling the cluster autoscaler by deleting the ClusterAutoscaler CR prevents the cluster from autoscaling but does not delete any existing machine autoscalers on the cluster. To clean up unneeded machine autoscalers, see "Disabling a machine autoscaler". If you need to re-enable the cluster autoscaler, use the <cluster_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a cluster autoscaler". Additional resources Disabling the machine autoscaler Deploying a cluster autoscaler 7.4. Additional resources Including pod priority in pod scheduling decisions in OpenShift Container Platform | [
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17 expanders: [\"Random\"] 18",
"oc get machinesets.machine.openshift.io",
"NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.*",
"oc create configmap cluster-autoscaler-priority-expander --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml",
"oc get configmaps cluster-autoscaler-priority-expander -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1",
"oc get MachineAutoscaler -n openshift-machine-api",
"NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m",
"oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2",
"oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api",
"machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted",
"oc get MachineAutoscaler -n openshift-machine-api",
"oc get ClusterAutoscaler",
"NAME AGE default 42m",
"oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2",
"oc delete ClusterAutoscaler/default",
"clusterautoscaler.autoscaling.openshift.io \"default\" deleted",
"oc get ClusterAutoscaler",
"No resources found"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/applying-autoscaling |
Chapter 9. neutron test | Chapter 9. neutron test The neutron test applies to OpenStack products or components that implement features for the OpenStack Networking service. The test uses Tempest Framework integrated with the Red Hat OpenStack Services on OpenShift (RHOSO) to test both operational and functional features. The neutron test verifies the functionality of the neutron driver or backend that you are certifying by running selected feature tests. The following feature groups are currently tested: Section 9.1, "neutron-base-ipv4" (required) Section 9.2, "neutron-base-ipv6" (required) Section 9.3, "neutron-address-scope" (optional) Section 9.4, "neutron-agents" (optional) Section 9.5, "neutron-allowed-address-pair" (optional) Section 9.6, "neutron-auto-allocated-topology" (optional) Section 9.7, "neutron-availability-zones" (optional) Section 9.8, "neutron-extra-dhcp-options" (optional) Section 9.9, "neutron-ip-availability" (optional) Section 9.10, "neutron-l3" (required) Section 9.11, "neutron-l3-ipv6" (required) Section 9.12, "neutron-l3-flavors" (optional) Section 9.13, "neutron-mtu" (optional) Section 9.14, "neutron-multicast" (optional) Section 9.15, "neutron-network-segment-range" (optional) Section 9.16, "neutron-port-security" (optional) Section 9.17, "neutron-security-groups" (optional) Section 9.18, "neutron-tags" (optional) Section 9.19, "neutron-port-forwarding" (optional) Section 9.20, "neutron-revision" (optional) Section 9.21, "neutron-rbac" (optional) Section 9.22, "neutron-shared-network" (optional) Section 9.23, "neutron-trunk-ports" (optional) Section 9.24, "neutron-quality-of-service" (optional) Section 9.25, "neutron-server-operations" (required) Section 9.26, "neutron-quota" (optional) Section 9.27, "neutron-dns-integration" (optional) Prerequisites When you deploy OpenStack, ensure that you provide at least two EDPM compute nodes. For more information on configuring a neutron see, Configuring networking services . Configure a public subnet before running the tempest tests. Note Ensure that the external network has a sufficient number of IP addresses available in the allocation_pools . The required number of IP addresses may vary depending on the number of test workers running concurrently. Configure a public image with an advanced operating system. By default, Tempest tests use the Cirros image to generate instances, which is sufficient for most tests. However, some network-related tests require an image such as RHEL with additional tools (e.g., tcpdump, Python). Note This configuration is managed by the rhos-cert-init script , but you need to provide certain values, such as the image URL. The Tempest and neutron-tempest-plugin offer numerous configuration options for testing the neutron component. Certification test environments do not configure most of these options automatically, except those related to the advanced image and flavor. If certifying the driver requires specific additional configuration settings, use a tempest-conf-overrides file. Initialize the certification test environment with the rhos-cert-init command. Create the tempest-conf-overrides file and add the required configuration settings. Note In the tempest-conf-overrides file, enter the necessary configuration settings on a separate line without comments. Add required configuration options to the file, for example: 9.1. neutron-base-ipv4 The neutron-base-ipv4 test validates the driver and base functionalities of neutron. 
It verifies that API operations for resources such as ports, networks, subnets, and subnet pools are functioning correctly. Additionally, it checks that basic connectivity for instances is working correctly. This test is mandatory. 9.2. neutron-base-ipv6 The neutron-base-ipv6 test verifies that the driver and base functionalities of neutron work correctly with IPv6. This test is mandatory. 9.3. neutron-address-scope The neutron-address-scope test verifies that all operations for managing address scopes can be performed with the help of a vendor driver. This test is optional. 9.4. neutron-agents The neutron-agents test verifies that all operations for managing neutron agents can be performed with the help of a vendor driver. This test is optional. 9.5. neutron-allowed-address-pair The neutron-allowed-address-pair test verifies that the allowed-address-pairs extension operates correctly with the help of a vendor driver. This test is optional. 9.6. neutron-auto-allocated-topology The neutron-auto-allocated-topology (get-me-a-network extension) test verifies whether a tenant user can delete or retrieve allocated topologies. This test is optional. 9.7. neutron-availability-zones The neutron-availability-zones test verifies that all standard API operations can be applied to availability zones. This test is mandatory if the driver supports availability zones in Neutron. 9.8. neutron-extra-dhcp-options The neutron-extra-dhcp-options test verifies that setting extra DHCP options works correctly with a vendor driver. This test is optional. 9.9. neutron-ip-availability The neutron-ip-availability test verifies that the APIs for checking the number of available and used IP addresses in networks work correctly with a vendor driver. This test is optional. 9.10. neutron-l3 The neutron-l3 test verifies that the driver functionalities related to L3 services, including routers and Floating IPs, work correctly with the vendor driver. This test is mandatory if the vendor's driver supports L3 services. 9.11. neutron-l3-ipv6 The neutron-l3-ipv6 test verifies that the driver functionalities related to L3 services and IPv6, including routers, work correctly with the vendor driver. This test is mandatory if the vendor's driver supports L3 services for IPv6. 9.12. neutron-l3-flavors The neutron-l3-flavors test verifies that all standard flavor operations can be performed using a third-party plugin or driver. This test is optional. 9.13. neutron-mtu The neutron-mtu test verifies whether the vendor driver allows setting and changing the MTU for networks. This test is optional. 9.14. neutron-multicast The neutron-multicast test verifies whether the vendor driver supports multicast traffic between virtual machines (VMs). This test is optional. 9.15. neutron-network-segment-range The neutron-network-segment-range test verifies whether the vendor driver allows management of available network segment ranges for tenants. This test is optional. 9.16. neutron-port-security The neutron-port-security test verifies whether the vendor driver allows enabling or disabling port security for ports and networks. This test is optional. 9.17. neutron-security-groups The neutron-security-groups test verifies that neutron's security group APIs and functionalities operate correctly with the vendor driver. This test is optional. 9.18. neutron-tags The neutron-tags test verifies that neutron's API for managing tags on various resources, such as networks and ports, operates correctly with the vendor driver. This test is optional. 9.19.
neutron-port-forwarding The neutron-port-forwarding test verifies that neutron's Floating IP Port Forwarding APIs and functionalities operate correctly with the vendor driver. This test is optional. 9.20. neutron-revision The neutron-revision test verifies that the vendor driver allows setting and checking revisions for API resources such as ports and networks. This test is optional. 9.21. neutron-rbac The neutron-rbac test verifies that all Role-Based Access Control (RBAC) operations can be performed on various Neutron resources. This test is optional. 9.22. neutron-shared-network The neutron-shared-network test verifies whether the vendor driver allows networks to be shared with all other tenants in the cloud. This test is optional. 9.23. neutron-trunk-ports The neutron-trunk-ports test verifies whether neutron's Trunk APIs and functionalities operate correctly with the vendor driver. This test is optional. 9.24. neutron-quality-of-service The neutron-quality-of-service test verifies that neutron's QoS APIs and functionalities work correctly with the vendor driver. This test is optional. 9.25. neutron-server-operations The neutron-server-operations test verifies the connectivity to instances before and after key operations, including: Cold or live migration Resize and revert resize Pause and unpause Reboot Stop or start This test is mandatory. 9.26. neutron-quota The neutron-quota test verifies whether the vendor driver allows management of resource quotas for each project. This test is optional. 9.27. neutron-dns-integration The neutron-dns-integration test verifies whether the integration between neutron and Designate (DNS as a Service) functions correctly. This test is optional. | [
"neutron_plugin_options.max_mtu 9000"
]
| https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_policy_guide/con_neutron_rhoso-policy-certifiation-lifecycle |
Chapter 16. Setting up a broker cluster | Chapter 16. Setting up a broker cluster A cluster consists of multiple broker instances that have been grouped together. Broker clusters enhance performance by distributing the message processing load across multiple brokers. In addition, broker clusters can minimize downtime through high availability. You can connect brokers together in many different cluster topologies. Within the cluster, each active broker manages its own messages and handles its own connections. You can also balance client connections across the cluster and redistribute messages to avoid broker starvation. 16.1. Understanding broker clusters Before creating a broker cluster, you should understand some important clustering concepts. 16.1.1. How broker clusters balance message load When brokers are connected to form a cluster, AMQ Broker automatically balances the message load between the brokers. This ensures that the cluster can maintain high message throughput. Consider a symmetric cluster of four brokers. Each broker is configured with a queue named OrderQueue . The OrderProducer client connects to Broker1 and sends messages to OrderQueue . Broker1 forwards the messages to the other brokers in round-robin fashion. The OrderConsumer clients connected to each broker consume the messages. The exact order depends on the order in which the brokers started. Figure 16.1. Message load balancing Without message load balancing, the messages sent to Broker1 would stay on Broker1 and only OrderConsumer1 would be able to consume them. While AMQ Broker automatically load balances messages by default, you can configure the cluster to only load balance messages to brokers that have a matching consumer. You can also configure message redistribution to automatically redistribute messages from queues that do not have any consumers to queues that do have consumers. Additional resources The message load balancing policy is configured with the message-load-balancing property in each broker's cluster connection. For more information, see Appendix C, Cluster Connection Configuration Elements . For more information about message redistribution, see Section 16.4.2, "Configuring message redistribution" . 16.1.2. How broker clusters improve reliability Broker clusters make high availability and failover possible, which makes them more reliable than standalone brokers. By configuring high availability, you can ensure that client applications can continue to send and receive messages even if a broker encounters a failure event. With high availability, the brokers in the cluster are grouped into live-backup groups. A live-backup group consists of a live broker that serves client requests, and one or more backup brokers that wait passively to replace the live broker if it fails. If a failure occurs, a backup broker replaces the live broker in its live-backup group, and the clients reconnect and continue their work. 16.1.3. Understanding node IDs The broker node ID is a Globally Unique Identifier (GUID) generated programmatically when the journal for a broker instance is first created and initialized. The node ID is stored in the server.lock file. The node ID is used to uniquely identify a broker instance, regardless of whether the broker is a standalone instance, or part of a cluster. Live-backup broker pairs share the same node ID, since they share the same journal. In a broker cluster, broker instances (nodes) connect to each other and create bridges and internal "store-and-forward" queues.
The names of these internal queues are based on the node IDs of the other broker instances. Broker instances also monitor cluster broadcasts for node IDs that match their own. A broker produces a warning message in the log if it identifies a duplicate ID. When you are using the replication high availability (HA) policy, a master broker that starts and has check-for-live-server set to true searches for a broker that is using its node ID. If the master broker finds another broker using the same node ID, it either does not start, or initiates failback, based on the HA configuration. The node ID is durable , meaning that it survives restarts of the broker. However, if you delete a broker instance (including its journal), then the node ID is also permanently deleted. Additional resources For more information about configuring the replication HA policy, see Configuring replication high availability . 16.1.4. Common broker cluster topologies You can connect brokers to form either a symmetric or chain cluster topology. The topology you implement depends on your environment and messaging requirements. Symmetric clusters In a symmetric cluster, every broker is connected to every other broker. This means that every broker is no more than one hop away from every other broker. Figure 16.2. Symmetric cluster Each broker in a symmetric cluster is aware of all of the queues that exist on every other broker in the cluster and the consumers that are listening on those queues. Therefore, symmetric clusters are able to load balance and redistribute messages more optimally than a chain cluster. Symmetric clusters are easier to set up than chain clusters, but they can be difficult to use in environments in which network restrictions prevent brokers from being directly connected. Chain clusters In a chain cluster, each broker in the cluster is not connected to every broker in the cluster directly. Instead, the brokers form a chain with a broker on each end of the chain and all other brokers just connecting to the previous and next brokers in the chain. Figure 16.3. Chain cluster Chain clusters are more difficult to set up than symmetric clusters, but can be useful when brokers are on separate networks and cannot be directly connected. By using a chain cluster, an intermediary broker can indirectly connect two brokers to enable messages to flow between them even though the two brokers are not directly connected. 16.1.5. Broker discovery methods Discovery is the mechanism by which brokers in a cluster propagate their connection details to each other. AMQ Broker supports both dynamic discovery and static discovery . Dynamic discovery Each broker in the cluster broadcasts its connection settings to the other members through either UDP multicast or JGroups. In this method, each broker uses: A broadcast group to push information about its cluster connection to other potential members of the cluster. A discovery group to receive and store cluster connection information about the other brokers in the cluster. Static discovery If you are not able to use UDP or JGroups in your network, or if you want to manually specify each member of the cluster, you can use static discovery. In this method, a broker "joins" the cluster by connecting to a second broker and sending its connection details. The second broker then propagates those details to the other brokers in the cluster. 16.1.6. Cluster sizing considerations Before creating a broker cluster, consider your messaging throughput, topology, and high availability requirements.
These factors affect the number of brokers to include in the cluster. Note After creating the cluster, you can adjust the size by adding and removing brokers. You can add and remove brokers without losing any messages. Messaging throughput The cluster should contain enough brokers to provide the messaging throughput that you require. The more brokers in the cluster, the greater the throughput. However, large clusters can be complex to manage. Topology You can create either symmetric clusters or chain clusters. The type of topology you choose affects the number of brokers you may need. For more information, see Section 16.1.4, "Common broker cluster topologies" . High availability If you require high availability (HA), consider choosing an HA policy before creating the cluster. The HA policy affects the size of the cluster, because each master broker should have at least one slave broker. For more information, see Section 16.3, "Implementing high availability" . 16.2. Creating a broker cluster You create a broker cluster by configuring a cluster connection on each broker that should participate in the cluster. The cluster connection defines how the broker should connect to the other brokers. You can create a broker cluster that uses static discovery or dynamic discovery (either UDP multicast or JGroups). Prerequisites You should have determined the size of the broker cluster. For more information, see Section 16.1.6, "Cluster sizing considerations" . 16.2.1. Creating a broker cluster with static discovery You can create a broker cluster by specifying a static list of brokers. Use this static discovery method if you are unable to use UDP multicast or JGroups on your network. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Within the <core> element, add the following connectors: A connector that defines how other brokers can connect to this one One or more connectors that define how this broker can connect to other brokers in the cluster <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> 1 <connector name="broker2">tcp://localhost:61618</connector> 2 <connector name="broker3">tcp://localhost:61619</connector> </connectors> ... </core> </configuration> 1 This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. 2 The broker2 and broker3 connectors define how this broker can connect to two other brokers in the cluster, one of which will always be available. If there are other brokers in the cluster, they will be discovered by one of these connectors when the initial connection is made. For more information about connectors, see Section 2.2, "About Connectors" . Add a cluster connection and configure it to use static discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2</connector-ref> <connector-ref>broker3</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one.
static-connectors One or more connectors that this broker can use to make an initial connection to another broker in the cluster. After making this initial connection, the broker will discover the other brokers in the cluster. You only need to configure this property if the cluster uses static discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster that uses static discovery, see the clustered-static-discovery AMQ Broker example program . 16.2.2. Creating a broker cluster with UDP-based dynamic discovery You can create a broker cluster in which the brokers discover each other dynamically through UDP multicast. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Add a UDP broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses UDP to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. local-bind-address The address to which the UDP socket is bound. If you have multiple network interfaces on your broker, you should specify which one you want to use for broadcasts. If this property is not specified, the socket will be bound to an IP address chosen by the operating system. This is a UDP-specific attribute. local-bind-port The port to which the datagram socket is bound. In most cases, use the default value of -1 , which specifies an anonymous port. This parameter is used in connection with local-bind-address . 
This is a UDP-specific attribute. group-address The multicast address to which the data will be broadcast. It is a class D IP address in the range 224.0.0.0 - 239.255.255.255 inclusive. The address 224.0.0.0 is reserved and is not available for use. This is a UDP-specific attribute. group-port The UDP port number used for broadcasting. This is a UDP-specific attribute. broadcast-period (optional) The interval in milliseconds between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a UDP discovery group. The discovery group defines how this broker receives connector information from other brokers. The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. This discovery group uses UDP to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. local-bind-address (optional) If the machine on which the broker is running uses multiple network interfaces, you can specify the network interface to which the discovery group should listen. This is a UDP-specific attribute. group-address The multicast address of the group on which to listen. It should match the group-address in the broadcast group that you want to listen from. This is a UDP-specific attribute. group-port The UDP port number of the multicast group. It should match the group-port in the broadcast group that you want to listen from. This is a UDP-specific attribute. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. 
Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster configuration that uses dynamic discovery with UDP, see the clustered-queue AMQ Broker example program . 16.2.3. Creating a broker cluster with JGroups-based dynamic discovery If you are already using JGroups in your environment, you can use it to create a broker cluster in which the brokers discover each other dynamically. Prerequisites JGroups must be installed and configured. For an example of a JGroups configuration file, see the clustered-jgroups AMQ Broker example program . Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Within the <core> element, add a JGroups broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses JGroups to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for broadcasting. broadcast-period (optional) The interval, in milliseconds, between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a JGroups discovery group. The discovery group defines how connector information is received. The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. 
This discovery group uses JGroups to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for receiving broadcasts. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster that uses dynamic discovery with JGroups, see the clustered-jgroups AMQ Broker example program . 16.3. Implementing high availability After creating a broker cluster, you can improve its reliability by implementing high availability (HA). 
With HA, the broker cluster can continue to function even if one or more brokers go offline. Implementing HA involves several steps: You should understand what live-backup groups are, and choose an HA policy that best meets your requirements. See Understanding how HA works in AMQ Broker . When you have chosen a suitable HA policy, configure the HA policy on each broker in the cluster. See: Configuring shared store high availability Configuring replication high availability Configuring limited high availability with live-only Configuring high availability with colocated backups Configure your client applications to use failover . Note In the later event that you need to troubleshoot a broker cluster configured for high availability, it is recommended that you enable Garbage Collection (GC) logging for each Java Virtual Machine (JVM) instance that is running a broker in the cluster. To learn how to enable GC logs on your JVM, consult the official documentation for the Java Development Kit (JDK) version used by your JVM. For more information on the JVM versions that AMQ Broker supports, see Red Hat AMQ 7 Supported Configurations . 16.3.1. Understanding high availability In AMQ Broker, you implement high availability (HA) by grouping the brokers in the cluster into live-backup groups . In a live-backup group, a live broker is linked to a backup broker, which can take over for the live broker if it fails. AMQ Broker also provides several different strategies for failover (called HA policies ) within a live-backup group. 16.3.1.1. How live-backup groups provide high availability In AMQ Broker, you implement high availability (HA) by linking together the brokers in your cluster to form live-backup groups . Live-backup groups provide failover , which means that if one broker fails, another broker can take over its message processing. A live-backup group consists of one live broker (sometimes called the master broker) linked to one or more backup brokers (sometimes called slave brokers). The live broker serves client requests, while the backup brokers wait in passive mode. If the live broker fails, a backup broker replaces the live broker, enabling the clients to reconnect and continue their work. 16.3.1.2. High availability policies A high availability (HA) policy defines how failover happens in a live-backup group. AMQ Broker provides several different HA policies: Shared store (recommended) The live and backup brokers store their messaging data in a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In most cases, you should use shared store instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Replication The live and backup brokers continuously synchronize their messaging data over the network. If the live broker fails, the backup broker loads the synchronized data and takes over for the failed live broker. Data synchronization between the live and backup brokers ensures that no messaging data is lost if the live broker fails. 
When the live and backup brokers initially join together, the live broker replicates all of its existing data to the backup broker over the network. Once this initial phase is complete, the live broker replicates persistent data to the backup broker as the live broker receives it. This means that if the live broker drops off the network, the backup broker has all of the persistent data that the live broker has received up to that point. Because replication synchronizes data over the network, network failures can result in network isolation in which a live broker and its backup become live at the same time. Live-only (limited HA) When a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. Additional resources For more information about the persistent message data that is shared between brokers in a live-backup group, see Section 6.1, "About Journal-based Persistence" . 16.3.1.3. Replication policy limitations Network isolation (sometimes called "split brain") is a limitation of the replication high availability (HA) policy. You should understand how it occurs, and how to avoid it. Network isolation can happen if a live broker and its backup lose their connection. In this situation, both a live broker and its backup can become active at the same time. Specifically, if the backup broker can still connect to more than half of the live brokers in the cluster, it also becomes active. Because there is no message replication between the brokers in this situation, they each serve clients and process messages without the other knowing it. In this case, each broker has a completely different journal. Recovering from this situation can be very difficult and in some cases, not possible. To avoid network isolation, consider the following: To eliminate any possibility of network isolation, use the shared store HA policy. If you do use the replication HA policy, you can reduce (but not eliminate) the chance of encountering network isolation by using at least three live-backup pairs . Using at least three live-backup pairs ensures that a majority result can be achieved in any quorum vote that takes place when a live-backup broker pair experiences a replication interruption. Some additional considerations when you use the replication HA policy are described below: When a live broker fails and the backup transitions to live, no further replication takes place until a new backup broker is attached to the live, or failback to the original live broker occurs. If the backup broker in a live-backup group fails, the live broker continues to serve messages. However, messages are not replicated until another broker is added as a backup, or the original backup broker is restarted. During that time, messages are persisted only to the live broker. Suppose that both brokers in a live-backup pair were previously shut down, but are now available to be restarted. In this case, to avoid message loss, you need to restart the most recently active broker first. If the most recently active broker was the backup broker, you need to manually reconfigure this broker as a master broker to enable it to be restarted first. 16.3.2. Configuring shared store high availability You can use the shared store high availability (HA) policy to implement HA in a broker cluster. 
With shared store, both live and backup brokers access a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In general, a SAN offers better performance (for example, speed) versus an NFS server, and is the recommended option, if available. If you need to use an NFS server, see Red Hat AMQ 7 Supported Configurations for more information about network file systems that AMQ Broker supports. In most cases, you should use shared store HA instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Note When using shared store, the startup time for the backup broker depends on the size of the message journal. When the backup broker takes over for a failed live broker, it loads the journal from the shared store. This process can be time consuming if the journal contains a lot of data. 16.3.2.1. Configuring an NFS shared store When using shared store high availability, you must configure both the live and backup brokers to use a common directory on a shared file system. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server. Listed below are some recommended configuration options when mounting an exported directory from an NFS server on each of your broker machine instances. sync Specifies that all changes are immediately flushed to disk. intr Allows NFS requests to be interrupted if the server is shut down or cannot be reached. noac Disables attribute caching. This behavior is needed to achieve attribute cache coherence among multiple clients. soft Specifies that if the NFS server is unavailable, the error should be reported rather than waiting for the server to come back online. lookupcache=none Disables lookup caching. timeo=n The time, in deciseconds (tenths of a second), that the NFS client (that is, the broker) waits for a response from the NFS server before it retries a request. For NFS over TCP, the default timeo value is 600 (60 seconds). For NFS over UDP, the client uses an adaptive algorithm to estimate an appropriate timeout value for frequently used request types, such as read and write requests. retrans=n The number of times that the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. Important It is important to use reasonable values when you configure the timeo and retrans options. A default timeo wait time of 600 deciseconds (60 seconds) combined with a retrans value of 5 retries can result in a five-minute wait for AMQ Broker to detect an NFS disconnection. Additional resources To learn how to mount an exported directory from an NFS server, see Mounting an NFS share with mount in the Red Hat Enterprise Linux documentation. For information about network file systems supported by AMQ Broker, see Red Hat AMQ 7 Supported Configurations . 16.3.2.2. Configuring shared store high availability This procedure shows how to configure shared store high availability for a broker cluster. 
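If an NFS server provides the shared store, you apply the mount options described in the previous section when you mount the exported directory on each broker host. The following mount command is a sketch only: the NFS server host name, export path, local mount point, and the timeo and retrans values shown are assumptions that you should adapt to your environment.
mount -t nfs -o sync,intr,noac,soft,lookupcache=none,timeo=600,retrans=2 nfs.example.com:/exports/amq-sharedstore /var/opt/amq-sharedstore
The mounted directory is then used for the paging, bindings, journal, and large messages directories that you configure in the following procedure.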
Prerequisites A shared storage system must be accessible to the live and backup brokers. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server to provide the shared store. For more information about supported network file systems, see Red Hat AMQ 7 Supported Configurations . If you have configured JDBC-based persistence, you can use your specified database to provide the shared store. To learn how to configure JDBC persistence, see Configuring JDBC Persistence . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you would need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker-instance-dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the live broker's paging, bindings, journal, and large messages directories point to a shared location that the backup broker can also access. <configuration> <core> ... <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup broker can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. An example configuration is shown below. <configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE<node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration> Configure the live broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker is stopped normally, this property controls whether the backup broker should become live and take over. Open the backup broker's <broker-instance-dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the backup broker's paging, bindings, journal, and large messages directories point to the same shared location as the live broker. <configuration> <core> ... 
<paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup brokers can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. Configure the backup broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker has become live and then is stopped normally, this property controls whether the backup broker (the original live broker) should become live and take over. allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. restart-backup This property controls whether the backup broker automatically restarts after it fails back to the live broker. The default value of this property is true . Repeat Step 2 for each remaining live-backup group in the cluster. 16.3.3. Configuring replication high availability You can use the replication high availability (HA) policy to implement HA in a broker cluster. With replication, persistent data is synchronized between the live and backup brokers. If a live broker encounters a failure, message data is synchronized to the backup broker and it takes over for the failed live broker. You should use replication as an alternative to shared store, if you do not have a shared file system. However, replication can result in network isolation in which a live broker and its backup become live at the same time. Replication requires at least three live-backup pairs to lessen (but not eliminate) the risk of network isolation. Using at least three live-backup broker pairs enables your cluster to use quorum voting to avoid having two live brokers. The sections that follow explain how quorum voting works and how to configure replication HA for a broker cluster with at least three live-backup pairs. Note Because the live and backup brokers must synchronize their messaging data over the network, replication adds a performance overhead. This synchronization process blocks journal operations, but it does not block clients. You can configure the maximum amount of time that journal operations can be blocked for data synchronization. 16.3.3.1. About quorum voting In the event that a live broker and its backup experience an interrupted replication connection, you can configure a process called quorum voting to mitigate against network isolation (or "split brain") issues. 
During network isolation, a live broker and its backup can become active at the same time. The following table describes the two types of quorum voting that AMQ Broker uses. Vote type Description Initiator Required configuration Participants Action based on vote result Backup vote If a backup broker loses its replication connection to the live broker, the backup broker decides whether or not to start based on the result of this vote. Backup broker None. A backup vote happens automatically when a backup broker loses connection to its replication partner. However, you can control the properties of a backup vote by specifying custom values for these parameters: quorum-vote-wait vote-retries vote-retry-wait Other live brokers in the cluster The backup broker starts if it receives a majority (that is, a quorum ) vote from the other live brokers in the cluster, indicating that its replication partner is no longer available. Live vote If a live broker loses connection to its replication partner, the live broker decides whether to continue running based on this vote. Live broker A live vote happens when a live broker loses connection to its replication partner and vote-on-replication-failure is set to true . A backup broker that has become active is considered a live broker, and can initiate a live vote. Other live brokers in the cluster The live broker shuts down if it doesn't receive a majority vote from the other live brokers in the cluster, indicating that its cluster connection is still active. Important Listed below are some important things to note about how the configuration of your broker cluster affects the behavior of quorum voting. For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, when you use the replication HA policy, your cluster should have at least three live-backup broker pairs. The more live-backup broker pairs that you add to your cluster, the more you increase the overall fault tolerance of the cluster. For example, suppose you have three live-backup pairs. If you lose a complete live-backup pair, the two remaining live-backup pairs cannot achieve a majority result in any subsequent quorum vote. This situation means that any further replication interruption in the cluster might cause a live broker to shut down, and prevent its backup broker from starting up. By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. If you intentionally reduce the number of live-backup broker pairs in your cluster, the previously established threshold for a majority vote does not automatically decrease. During this time, any quorum vote triggered by a lost replication connection cannot succeed, making your cluster more vulnerable to network isolation. To make your cluster recalculate the majority threshold for a quorum vote, first shut down the live-backup pairs that you are removing from your cluster. Then, restart the remaining live-backup pairs in the cluster. When all of the remaining brokers have been restarted, the cluster recalculates the quorum vote threshold. 16.3.3.2. Configuring a broker cluster for replication high availability The following procedure describes how to configure replication high-availability (HA) for a six-broker cluster. In this topology, the six brokers are grouped into three live-backup pairs: each of the three live brokers is paired with a dedicated backup broker. 
Replication requires at least three live-backup pairs to lessen (but not eliminate) the risk of network isolation. Prerequisites You must have a broker cluster with at least six brokers. The six brokers are configured into three live-backup pairs. For more information about adding brokers to a cluster, see Chapter 16, Setting up a broker cluster . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker-instance-dir> /etc/broker.xml configuration file. Configure the live broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </master> </replication> </ha-policy> ... </core> </configuration> check-for-live-server If the live broker fails, this property controls whether clients should fail back to it when it restarts. If you set this property to true , when the live broker restarts after a failover, it searches for another broker in the cluster with the same node ID. If the live broker finds another broker with the same node ID, this indicates that a backup broker successfully started upon failure of the live broker. In this case, the live broker synchronizes its data with the backup broker. The live broker then requests the backup broker to shut down. If the backup broker is configured for failback, as shown below, it shuts down. The live broker then resumes its active role, and clients reconnect to it. Warning If you do not set check-for-live-server to true on the live broker, you might experience duplicate messaging handling when you restart the live broker after a failover. Specifically, if you restart a live broker with this property set to false , the live broker does not synchronize data with its backup broker. In this case, the live broker might process the same messages that the backup broker has already handled, causing duplicates. group-name A name for this live-backup group. To form a live-backup group, the live and backup brokers must be configured with the same group name. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. Important For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, when you use the replication HA policy, your cluster should have at least three live-backup broker pairs. The more broker pairs you configure in your cluster, the more you increase the overall fault tolerance of the cluster. For example, suppose you have three live-backup broker pairs. If you lose connection to a complete live-backup pair, the two remaining live-backup pairs can no longer achieve a majority result in a quorum vote. This situation means that any subsequent replication interruption might cause a live broker to shut down, and prevent its backup broker from starting up. 
By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. Configure any additional HA properties for the live broker. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Replication High Availability Configuration Elements . Open the backup broker's <broker-instance-dir> /etc/broker.xml configuration file. Configure the backup (that is, slave) broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </slave> </replication> </ha-policy> ... </core> </configuration> allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. restart-backup This property controls whether the backup broker automatically restarts after it fails back to the live broker. The default value of this property is true . group-name The group name of the live broker to which this backup should connect. A backup broker connects only to a live broker that shares the same group name. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A backup broker that has become active is considered a live broker and can initiate a live vote. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. (Optional) Configure properties of the quorum votes that the backup broker initiates. <configuration> <core> ... <ha-policy> <replication> <slave> ... <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> ... </slave> </replication> </ha-policy> ... </core> </configuration> vote-retries This property controls how many times the backup broker retries the quorum vote in order to receive a majority result that allows the backup broker to start up. vote-retry-wait This property controls how long, in milliseconds, that the backup broker waits between each retry of the quorum vote. Configure any additional HA properties for the backup broker. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Replication High Availability Configuration Elements . Repeat step 2 for each additional live-backup group in the cluster. 
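For example, the brokers in the second live-backup group use the same replication settings as the first group, but share a different group name. The fragments below are a sketch only; the group name my-group-2 is an assumption, and you can use any name that is unique to the pair.
Live broker in the second group: <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-2</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </master> </replication> </ha-policy>
Backup broker in the second group: <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> <group-name>my-group-2</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </slave> </replication> </ha-policy>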
If there are six brokers in the cluster, repeat this procedure two more times; once for each remaining live-backup group. Additional resources For examples of broker clusters that use replication for HA, see the HA example programs . For more information about node IDs, see Understanding node IDs . 16.3.4. Configuring limited high availability with live-only The live-only HA policy enables you to shut down a broker in a cluster without losing any messages. With live-only, when a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. The live-only HA policy only handles cases when the broker is stopped gracefully. It does not handle unexpected broker failures. While live-only HA prevents message loss, it may not preserve message order. If a broker configured with live-only HA is stopped, its messages will be appended to the ends of the queues of another broker. Note When a broker is preparing to scale down, it sends a message to its clients before they are disconnected informing them which new broker is ready to process their messages. However, clients should reconnect to the new broker only after their initial broker has finished scaling down. This ensures that any state, such as queues or transactions, is available on the other broker when the client reconnects. The normal reconnect settings apply when the client is reconnecting, so you should set these high enough to deal with the time needed to scale down. This procedure describes how to configure each broker in the cluster to scale down. After completing this procedure, whenever a broker is stopped gracefully, it will copy its messages and transaction state to another broker in the cluster. Procedure Open the first broker's <broker-instance-dir> /etc/broker.xml configuration file. Configure the broker to use the live-only HA policy. <configuration> <core> ... <ha-policy> <live-only> </live-only> </ha-policy> ... </core> </configuration> Configure a method for scaling down the broker cluster. Specify the broker or group of brokers to which this broker should scale down. To scale down to... Do this... A specific broker in the cluster Specify the connector of the broker to which you want to scale down. <live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only> Any broker in the cluster Specify the broker cluster's discovery group. <live-only> <scale-down> <discovery-group-ref discovery-group-name="my-discovery-group"/> </scale-down> </live-only> A broker in a particular broker group Specify a broker group. <live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only> Repeat this procedure for each remaining broker in the cluster. Additional resources For an example of a broker cluster that uses live-only to scale down the cluster, see the scale-down example programs . 16.3.5. Configuring high availability with colocated backups Rather than configure live-backup groups, you can colocate backup brokers in the same JVM as another live broker. In this configuration, each live broker is configured to request another live broker to create and start a backup broker in its JVM. Figure 16.4. Colocated live and backup brokers You can use colocation with either shared store or replication as the high availability (HA) policy. 
The new backup broker inherits its configuration from the live broker that creates it. The name of the backup is set to colocated_backup_n where n is the number of backups the live broker has created. In addition, the backup broker inherits the configuration for its connectors and acceptors from the live broker that creates it. By default, port offset of 100 is applied to each. For example, if the live broker has an acceptor for port 61616, the first backup broker created will use port 61716, the second backup will use 61816, and so on. Directories for the journal, large messages, and paging are set according to the HA policy you choose. If you choose shared store, the requesting broker notifies the target broker which directories to use. If replication is chosen, directories are inherited from the creating broker and have the new backup's name appended to them. This procedure configures each broker in the cluster to use shared store HA, and to request a backup to be created and colocated with another broker in the cluster. Procedure Open the first broker's <broker-instance-dir> /etc/broker.xml configuration file. Configure the broker to use an HA policy and colocation. In this example, the broker is configured with shared store HA and colocation. <configuration> <core> ... <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval/> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> ... </core> </configuration> request-backup By setting this property to true , this broker will request a backup broker to be created by another live broker in the cluster. max-backups The number of backup brokers that this broker can create. If you set this property to 0 , this broker will not accept backup requests from other brokers in the cluster. backup-request-retries The number of times this broker should try to request a backup broker to be created. The default is -1 , which means unlimited tries. backup-request-retry-interval The amount of time in milliseconds that the broker should wait before retrying a request to create a backup broker. The default is 5000 , or 5 seconds. backup-port-offset The port offset to use for the acceptors and connectors for a new backup broker. If this broker receives a request to create a backup for another broker in the cluster, it will create the backup broker with the ports offset by this amount. The default is 100 . excludes (optional) Excludes connectors from the backup port offset. If you have configured any connectors for external brokers that should be excluded from the backup port offset, add a <connector-ref> for each of the connectors. master The shared store or replication failover configuration for this broker. slave The shared store or replication failover configuration for this broker's backup. Repeat this procedure for each remaining broker in the cluster. Additional resources For examples of broker clusters that use colocated backups, see the HA example programs . 16.3.6. 
Configuring clients to fail over After configuring high availability in a broker cluster, you configure your clients to fail over. Client failover ensures that if a broker fails, the clients connected to it can reconnect to another broker in the cluster with minimal downtime. Note In the event of transient network problems, AMQ Broker automatically reattaches connections to the same broker. This is similar to failover, except that the client reconnects to the same broker. You can configure two different types of client failover: Automatic client failover The client receives information about the broker cluster when it first connects. If the broker to which it is connected fails, the client automatically reconnects to the broker's backup, and the backup broker re-creates any sessions and consumers that existed on each connection before failover. Application-level client failover As an alternative to automatic client failover, you can instead code your client applications with your own custom reconnection logic in a failure handler. Procedure Use AMQ Core Protocol JMS to configure your client application with automatic or application-level failover. For more information, see Using the AMQ Core Protocol JMS Client . 16.4. Enabling message redistribution If your broker cluster uses on-demand message load balancing, you can configure message redistribution to prevent messages from being "stuck" in a queue that does not have a consumer to consume the messages. This section contains information about: Understanding message redistribution Configuring message redistribution 16.4.1. Understanding message redistribution Broker clusters use load balancing to distribute the message load across the cluster. When configuring load balancing in the cluster connection, if you set message-load-balancing to ON_DEMAND , the broker forwards messages only to other brokers that have matching consumers. This behavior ensures that messages are not moved to queues that do not have any consumers to consume the messages. However, if the consumers attached to a queue close after the messages are forwarded to the broker, those messages become "stuck" in the queue and are not consumed. This issue is sometimes called starvation . Message redistribution prevents starvation by automatically redistributing the messages from queues that have no consumers to brokers in the cluster that do have matching consumers. 16.4.1.1. Limitations of message redistribution with message filters Message redistribution does not support the use of filters (also known as selectors ) by consumers. A common use case for consumers with filters is a request-reply pattern using a correlation ID. For example, consider the following scenario: You have a cluster of two brokers, brokerA and brokerB . Each broker is configured with redistribution-delay set to 0 and message-load-balancing set to ON_DEMAND . brokerA and brokerB each has a queue named myQueue . Based on a request, a producer sends a message that is routed to queue myQueue on brokerA . The message has a correlation ID property named myCorrelID , with a value of 10 . A consumer connects to queue myQueue on brokerA with a filter of myCorrelID=5 . This filter does not match the correlation ID value of the message. Another consumer connects to queue myQueue on brokerB with a filter of myCorrelID=10 . This filter matches the correlation ID value of the message.
In this case, although the filter of the consumer on brokerB matches the message, the message is not redistributed from brokerA to brokerB because a consumer for the queue myQueue exists on brokerA . In the preceding scenario, you can ensure that the intended client receives the message by creating the consumers before the request is sent to the producer. The message is immediately routed to the consumer with a filter matching the correlation ID of the message. Redistribution is not required. Additional resources For more information about cluster load balancing, see Section 16.1.1, "How broker clusters balance message load" . 16.4.2. Configuring message redistribution This procedure shows how to configure message redistribution. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. In the <cluster-connection> element, verify that <message-load-balancing> is set to <ON_DEMAND> . <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> ... <message-load-balancing>ON_DEMAND</message-load-balancing> ... </cluster-connection> </cluster-connections> </core> </configuration> Within the <address-settings> element, set the redistribution delay for a queue or set of queues. In this example, messages load balanced to my.queue will be redistributed 5000 milliseconds after the last consumer closes. <configuration> <core> ... <address-settings> <address-setting match="my.queue"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> ... </core> </configuration> address-setting Set the match attribute to be the name of the queue for which you want messages to be redistributed. You can use the broker wildcard syntax to specify a range of queues. For more information, see Section 4.2, "Applying address settings to sets of addresses" . redistribution-delay The amount of time (in milliseconds) that the broker should wait after this queue's final consumer closes before redistributing messages to other brokers in the cluster. If you set this to 0 , messages will be redistributed immediately. However, you should typically set a delay before redistributing - it is common for a consumer to close but another one to be quickly created on the same queue. Repeat this procedure for each additional broker in the cluster. Additional resources For an example of a broker cluster configuration that redistributes messages, see the queue-message-redistribution AMQ Broker example program . 16.5. Configuring clustered message grouping Message grouping enables clients to send groups of messages of a particular type to be processed serially by the same consumer. By adding a grouping handler to each broker in the cluster, you ensure that clients can send grouped messages to any broker in the cluster and still have those messages consumed in the correct order by the same consumer. There are two types of grouping handlers: local handlers and remote handlers . They enable the broker cluster to route all of the messages in a particular group to the appropriate queue so that the intended consumer can consume them in the correct order. Prerequisites There should be at least one consumer on each broker in the cluster. When a message is pinned to a consumer on a queue, all messages with the same group ID will be routed to that queue. If the consumer is removed, the queue will continue to receive the messages even if there are no consumers. Procedure Configure a local handler on one broker in the cluster. 
If you are using high availability, this should be a master broker. Open the broker's <broker-instance-dir> /etc/broker.xml configuration file. Within the <core> element, add a local handler: The local handler serves as an arbiter for the remote handlers. It stores route information and communicates it to the other brokers. <configuration> <core> ... <grouping-handler name="my-grouping-handler"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to LOCAL . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. If the timeout is reached before a routing decision is made, an exception is thrown, which ensures strict message ordering. When the broker receives a message with a group ID, it proposes a route to a queue to which the consumer is attached. If the route is accepted by the grouping handlers on the other brokers in the cluster, then the route is established: all brokers in the cluster will forward messages with this group ID to that queue. If the broker's route proposal is rejected, then it proposes an alternate route, repeating the process until a route is accepted. If you are using high availability, copy the local handler configuration to the master broker's slave broker. Copying the local handler configuration to the slave broker prevents a single point of failure for the local handler. On each remaining broker in the cluster, configure a remote handler. Open the broker's <broker-instance-dir> /etc/broker.xml configuration file. Within the <core> element, add a remote handler: <configuration> <core> ... <grouping-handler name="my-grouping-handler"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to REMOTE . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. Set this value to at least half of the value of the local handler. Additional resources For an example of a broker cluster configured for message grouping, see the clustered-grouping AMQ Broker example program . 16.6. Connecting clients to a broker cluster You can use the AMQ JMS clients to connect to the cluster. By using JMS, you can configure your messaging clients to discover the list of brokers dynamically or statically. You can also configure client-side load balancing to distribute the client sessions created from the connection across the cluster. Procedure Use AMQ Core Protocol JMS to configure your client application to connect to the broker cluster. For more information, see Using the AMQ Core Protocol JMS Client . | [
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> 1 <connector name=\"broker2\">tcp://localhost:61618</connector> 2 <connector name=\"broker3\">tcp://localhost:61619</connector> </connectors> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2-connector</connector-ref> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>",
"<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>",
"<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>",
"<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>",
"<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>",
"<configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE<node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration>",
"<configuration> <core> <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>",
"<configuration> <core> <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </master> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </slave> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <slave> <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> </slave> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <live-only> </live-only> </ha-policy> </core> </configuration>",
"<live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only>",
"<live-only> <scale-down> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </scale-down> </live-only>",
"<live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only>",
"<configuration> <core> <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval/> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <message-load-balancing>ON_DEMAND</message-load-balancing> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <address-settings> <address-setting match=\"my.queue\"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> </core> </configuration>",
"<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> </core> </configuration>",
"<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> </core> </configuration>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/setting-up-broker-cluster-configuring |
Chapter 12. Configuring System Purpose using the subscription-manager command-line tool | Chapter 12. Configuring System Purpose using the subscription-manager command-line tool System purpose is a feature of the Red Hat Enterprise Linux installation to help RHEL customers get the benefit of our subscription experience and services offered in the Red Hat Hybrid Cloud Console, a dashboard-based, Software-as-a-Service (SaaS) application that enables you to view subscription usage in your Red Hat account. You can configure system purpose attributes either on the activation keys or by using the subscription manager tool. Prerequisites You have installed and registered your Red Hat Enterprise Linux 9 system, but system purpose is not configured. You are logged in as a root user. Note In the entitlement mode, if your system is registered but has subscriptions that do not satisfy the required purpose, you can run the subscription-manager remove --all command to remove attached subscriptions. You can then use the command-line subscription-manager syspurpose {role, usage, service-level} tools to set the required purpose attributes, and lastly run subscription-manager attach --auto to re-entitle the system with considerations for the updated attributes. Whereas, in the SCA enabled account, you can directly update the system purpose details post registration without making an update to the subscriptions in the system. Procedure From a terminal window, run the following command to set the intended role of the system: Replace VALUE with the role that you want to assign: Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node For example: Optional: Before setting a value, see the available roles supported by the subscriptions for your organization: Optional: Run the following command to unset the role: Run the following command to set the intended Service Level Agreement (SLA) of the system: Replace VALUE with the SLA that you want to assign: Premium Standard Self-Support For example: Optional: Before setting a value, see the available service-levels supported by the subscriptions for your organization: Optional: Run the following command to unset the SLA: Run the following command to set the intended usage of the system: Replace VALUE with the usage that you want to assign: Production Disaster Recovery Development/Test For example: Optional: Before setting a value, see the available usages supported by the subscriptions for your organization: Optional: Run the following command to unset the usage: Run the following command to show the current system purpose properties: Optional: For more detailed syntax information run the following command to access the subscription-manager man page and browse to the SYSPURPOSE OPTIONS: Verification To verify the system's subscription status in a system registered with an account having entitlement mode enabled: An overall status Current means that all of the installed products are covered by the subscription(s) attached and entitlements to access their content set repositories has been granted. A system purpose status Matched means that all of the system purpose attributes (role, usage, service-level) that were set on the system are satisfied by the subscription(s) attached. When the status information is not ideal, additional information is displayed to help the system administrator decide what corrections to make to the attached subscriptions to cover the installed products and intended system purpose. 
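For example, on a system registered to an account in entitlement mode, you might correct a mismatched attribute and then re-attach subscriptions before checking the status again. The following sequence is a sketch only; the service level shown is an example value, and you should substitute the attributes that apply to your organization.
subscription-manager syspurpose service-level --set "Standard"
subscription-manager attach --auto
subscription-manager status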
To verify the system's subscription status in a system registered with an account having SCA mode enabled: In SCA mode, subscriptions are no longer required to be attached to individual systems. Hence, both the overall status and system purpose status are displayed as Disabled . However, the technical, business, and operational use cases supplied by system purpose attributes are important to the subscriptions service. Without these attributes, the subscriptions service data is less accurate. Additional resources To learn more about the subscriptions service, see the Getting Started with the Subscriptions Service guide . | [
"subscription-manager syspurpose role --set \"VALUE\"",
"subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"",
"subscription-manager syspurpose role --list",
"subscription-manager syspurpose role --unset",
"subscription-manager syspurpose service-level --set \"VALUE\"",
"subscription-manager syspurpose service-level --set \"Standard\"",
"subscription-manager syspurpose service-level --list",
"subscription-manager syspurpose service-level --unset",
"subscription-manager syspurpose usage --set \"VALUE\"",
"subscription-manager syspurpose usage --set \"Production\"",
"subscription-manager syspurpose usage --list",
"subscription-manager syspurpose usage --unset",
"subscription-manager syspurpose --show",
"man subscription-manager",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/proc_configuring-system-purpose-using-the-subscription-manager-command-line-tool_rhel-installer |
Chapter 2. Preparing to update a cluster | Chapter 2. Preparing to update a cluster 2.1. Preparing to update to OpenShift Container Platform 4.15 Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update. 2.1.1. Kubernetes API removals There are no Kubernetes API removals in OpenShift Container Platform 4.15. Important If you have IPsec enabled on your cluster, you must disable it prior to upgrading to OpenShift Container Platform 4.15. There is a known issue where pod-to-pod communication might be interrupted or lost when updating to 4.15 without disabling IPsec. For information on disabling IPsec, see Configuring IPsec encryption . ( OCPBUGS-43323 ) 2.1.2. Assessing the risk of conditional updates A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them. The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update. When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat. However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk: Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment. Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support. Additional resources Evaluation of update availability 2.1.3. etcd backups before cluster updates etcd backups record the state of your cluster and all of its resource objects. You can use backups to attempt restoring the state of a cluster in disaster scenarios where you cannot recover a cluster in its currently dysfunctional state. In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced catastrophic conditions that cannot be fixed without reverting to the cluster version. etcd restorations might be destructive and destabilizing to a running cluster, use them only as a last resort. Warning Due to their high consequences, etcd restorations are not intended to be used as a rollback solution. Rolling your cluster back to a version is not supported. If your update is failing to complete, contact Red Hat support. There are several factors that affect the viability of an etcd restoration. 
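To take a backup before you update, you can run the etcd backup script from a debug shell on a control plane node. The following is a sketch only: the node name is a placeholder, and the script name and output path are assumptions based on the standard backup procedure, so follow the linked documentation for the exact, supported steps for your cluster version.
oc debug node/<control_plane_node>
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup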
For more information, see "Backing up etcd data" and "Restoring to a cluster state". Additional resources Backing up etcd Restoring to a cluster state 2.1.4. Best practices for cluster updates OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request. This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update. 2.1.4.1. Choose versions recommended by the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations based on cluster characteristics such as the cluster's subscribed channel. The Cluster Version Operator saves these recommendations as either recommended or conditional updates. While it is possible to attempt an update to a version that is not recommended by OSUS, following a recommended update path protects users from encountering known issues or unintended consequences on the cluster. Choose only update targets that are recommended by OSUS to ensure a successful update. 2.1.4.2. Address all critical alerts on the cluster Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster. In the Administrator perspective of the web console, navigate to Observe Alerting to find critical alerts. 2.1.4.3. Ensure that the cluster is in an Upgradable state When one or more Operators have not reported their Upgradeable condition as True for more than an hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does not block patch updates, but you cannot perform a minor version update until you resolve this alert and all Operators report Upgradeable as True . For more information about the Upgradeable condition, see "Understanding cluster Operator condition types" in the additional resources section. 2.1.4.4. Ensure that enough spare nodes are available A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster's ability to perform an update with minimal disruption to cluster workloads. Depending on the configured value of the cluster's maxUnavailable spec, the cluster might not be able to apply machine configuration changes to nodes if there is an unavailable node. Additionally, if compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to another node while the first node is taken offline for an update. Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 2.1.4.5. Ensure that the cluster's PodDisruptionBudget is properly configured You can use the PodDisruptionBudget object to define the minimum number or percentage of pod replicas that must be available at any given time. 
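For illustration, a minimal PodDisruptionBudget that keeps at least one replica of a hypothetical app=frontend workload available during node drains could look like the following; the namespace, name, and label are assumptions, not values taken from this document:
oc apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
  namespace: example-app
spec:
  # at least one frontend pod must remain available while nodes are drained
  minAvailable: 1
  selector:
    matchLabels:
      app: frontend
EOF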
This configuration protects workloads from disruptions during maintenance tasks such as cluster updates. However, it is possible to configure the PodDisruptionBudget for a given topology in a way that prevents nodes from being drained and updated during a cluster update. When planning a cluster update, check the configuration of the PodDisruptionBudget object for the following factors: For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the PodDisruptionBudget . For workloads that aren't highly available, make sure they are either not protected by a PodDisruptionBudget or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. Additional resources Understanding cluster Operator condition types 2.2. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 2.2.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 2.2.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 2.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP) and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. IBM Cloud and Nutanix Clusters installed on these platforms are configured using the ccoctl utility. 
Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-term credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Extracting and preparing credentials request resources About the Cloud Credential Operator 2.2.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. 
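If you prefer the CLI for this particular check, the issuer can be read directly; this is the same information that the console steps below expose on the Authentication YAML tab:
# a cloud-provider URL indicates manual mode with short-term credentials; empty output indicates manual mode without ccoctl
oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}{"\n"}'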
AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.1.3. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. 
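For example, where the API URL and user name are placeholders for your environment:
# prompts for a password; any identity bound to the cluster-admin role works
oc login https://api.<cluster_name>.<base_domain>:6443 -u <admin_user>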
To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. 
If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.2. Extracting and preparing credentials request resources Before updating a cluster that uses the Cloud Credential Operator (CCO) in manual mode, you must extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as user with cluster-admin privileges. Procedure Obtain the pull spec for the update that you want to apply by running the following command: USD oc adm upgrade The output of this command includes pull specs for the available updates similar to the following: Partial example output ... Recommended updates: VERSION IMAGE 4.15.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 ... Set a USDRELEASE_IMAGE variable with the release image that you want to use by running the following command: USD RELEASE_IMAGE=<update_pull_spec> where <update_pull_spec> is the pull spec for the release image that you want to use. For example: quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires for the target release. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 1 This field indicates the namespace which must exist to hold the generated secret. The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. 
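One way to collect every namespace referenced by the extracted CRs, assuming they were written to the directory used above, is the following convenience sketch; it is not part of the official procedure:
# print the unique spec.secretRef.namespace values across all extracted CredentialsRequest files
grep -h -A2 'secretRef:' <path_to_directory_for_credentials_requests>/*.yaml | grep 'namespace:' | sort -u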
For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace by running the following command: USD oc create namespace <component_namespace> steps If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), configure the ccoctl utility for a cluster update and use it to update your cloud provider resources. If your cluster was not configured with the ccoctl utility, manually update your cloud provider resources. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Manually updating cloud provider resources 2.2.3. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 2.2.4. Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. Note On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. 
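For example, a possible invocation; this assumes your ccoctl build accepts --dry-run on this subcommand, which you can confirm with ccoctl aws create-all --help:
ccoctl aws create-all \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --output-dir=<path_to_ccoctl_output_dir> \
  --dry-run
# with --dry-run, JSON definitions are written locally instead of AWS API calls being made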
Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. You have extracted and configured the ccoctl binary from the release image. Procedure Use the ccoctl tool to process all CredentialsRequest objects by running the command for your cloud provider. The following commands process CredentialsRequest objects: Example 2.1. Amazon Web Services (AWS) USD ccoctl aws create-all \ 1 --name=<name> \ 2 --region=<aws_region> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> \ 5 --create-private-s3-bucket 6 1 To create the AWS resources individually, use the "Creating AWS resources individually" procedure in the "Installing a cluster on AWS with customizations" content. This option might be useful if you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization. 2 Specify the name used to tag any cloud resources that are created for tracking. 3 Specify the AWS region in which cloud resources will be created. 4 Specify the directory containing the files for the component CredentialsRequest objects. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Example 2.2. Google Cloud Platform (GCP) USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> 5 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. Example 2.3. IBM Cloud USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 
4 Optional: Specify the name of the resource group used for scoping the access policies. Example 2.4. Microsoft Azure USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 5 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 6 1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' 5 Specify the name of the resource group that contains the DNS zone. 6 Specify the Azure resource group name. You can obtain this value by running the following command: USD oc get infrastructure cluster \ -o jsonpath \ --template '{ .status.platformStatus.azure.resourceGroupName }' Example 2.5. Nutanix USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster by running the following command: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Indicating that the cluster is ready to upgrade 2.2.5. Manually updating cloud provider resources Before upgrading a cluster with manually maintained credentials, you must create secrets for any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. 
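The sample Secret objects in the procedure below store their data fields as base64-encoded strings; one way to produce those values, with the key names shown purely as examples:
# encode each credential value without a trailing newline
echo -n '<aws_access_key_id>' | base64 -w0
echo -n '<aws_secret_access_key>' | base64 -w0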
Procedure Create YAML files with secrets for any CredentialsRequest custom resources that the new release image adds. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Example 2.6. Sample AWS YAML files Sample AWS CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample AWS Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Example 2.7. Sample Azure YAML files Note Global Azure and Azure Stack Hub use the same CredentialsRequest object and secret formats. Sample Azure CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Azure Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Example 2.8. Sample GCP YAML files Sample GCP CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample GCP Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for Azure Stack Hub Manually creating long-term credentials for GCP Indicating that the cluster is ready to upgrade 2.2.6. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. 
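You can inspect the current value of that condition from the CLI before and after annotating, for example:
# expect "False" before the upgradeable-to annotation is set, "True" once it is accepted
oc get clusteroperator cloud-credential -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}{"\n"}'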
Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade. 2.3. Preflight validation for Kernel Module Management (KMM) Modules Before performing an upgrade on the cluster with applied KMM modules, you must verify that kernel modules installed using KMM are able to be installed on the nodes after the cluster upgrade and possible kernel upgrade. Preflight attempts to validate every Module loaded in the cluster, in parallel. Preflight does not wait for validation of one Module to complete before starting validation of another Module . 2.3.1. Validation kickoff Preflight validation is triggered by creating a PreflightValidationOCP resource in the cluster. This spec contains two fields: releaseImage Mandatory field that provides the name of the release image for the OpenShift Container Platform version the cluster is upgraded to. pushBuiltImage If true , then the images created during the Build and Sign validation are pushed to their repositories. This field is false by default. 2.3.2. Validation lifecycle Preflight validation attempts to validate every module loaded in the cluster. Preflight stops running validation on a Module resource after the validation is successful. If module validation fails, you can change the module definitions and Preflight tries to validate the module again in the loop. If you want to run Preflight validation for an additional kernel, then you should create another PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is recommended to delete the PreflightValidationOCP resource. 2.3.3. Validation status A PreflightValidationOCP resource reports the status and progress of each module in the cluster that it attempts or has attempted to validate in its .status.modules list. Elements of that list contain the following fields: lastTransitionTime The last time the Module resource status transitioned from one status to another. This should be when the underlying status has changed. If that is not known, then using the time when the API field changed is acceptable. name The name of the Module resource. namespace The namespace of the Module resource. statusReason Verbal explanation regarding the status. 
verificationStage Describes the validation stage being executed: image : Image existence verification build : Build process verification sign : Sign process verification verificationStatus The status of the Module verification: true : Verified false : Verification failed error : Error during the verification process unknown : Verification has not started 2.3.4. Preflight validation stages per Module Preflight runs the following validations on every KMM Module present in the cluster: Image validation stage Build validation stage Sign validation stage 2.3.4.1. Image validation stage Image validation is always the first stage of the preflight validation to be executed. If image validation is successful, no other validations are run on that specific module. Image validation consists of two stages: Image existence and accessibility. The code tries to access the image defined for the upgraded kernel in the module and get its manifests. Verify the presence of the kernel module defined in the Module in the correct path for future modprobe execution. If this validation is successful, it probably means that the kernel module was compiled with the correct Linux headers. The correct path is <dirname>/lib/modules/<upgraded_kernel>/ . 2.3.4.2. Build validation stage Build validation is executed only when image validation has failed and there is a build section in the Module that is relevant for the upgraded kernel. Build validation attempts to run the build job and validate that it finishes successfully. Note You must specify the kernel version when running depmod , as shown here: USD RUN depmod -b /opt USD{KERNEL_VERSION} If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource (CR), it also tries to push the resulting image into its repository. The resulting image name is taken from the definition of the containerImage field of the Module CR. Note If the sign section is defined for the upgraded kernel, then the resulting image will not be the containerImage field of the Module CR, but a temporary image name, because the resulting image should be the product of Sign flow. 2.3.4.3. Sign validation stage Sign validation is executed only when image validation has failed. There is a sign section in the Module resource that is relevant for the upgrade kernel, and build validation finishes successfully in case there was a build section in the Module relevant for the upgraded kernel. Sign validation attempts to run the sign job and validate that it finishes successfully. If the PushBuiltImage flag is defined in the PreflightValidationOCP CR, sign validation also tries to push the resulting image to its registry. The resulting image is always the image defined in the ContainerImage field of the Module . The input image is either the output of the Build stage, or an image defined in the UnsignedImage field. Note If a build section exists, the sign section input image is the build section's output image. Therefore, in order for the input image to be available for the sign section, the PushBuiltImage flag must be defined in the PreflightValidationOCP CR. 2.3.5. Example PreflightValidationOCP resource This section shows an example of the PreflightValidationOCP resource in the YAML format. 
The example verifies all of the currently present modules against the upcoming kernel version included in the OpenShift Container Platform release 4.11.18, which the following release image points to: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 Because .spec.pushBuiltImage is set to true , KMM pushes the resulting images of Build/Sign in to the defined repositories. apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true | [
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc adm upgrade",
"Recommended updates: VERSION IMAGE 4.15.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"RELEASE_IMAGE=<update_pull_spec>",
"quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>",
"RUN depmod -b /opt USD{KERNEL_VERSION}",
"quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863",
"apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/updating_clusters/preparing-to-update-a-cluster |
Chapter 96. KafkaTopic schema reference | Chapter 96. KafkaTopic schema reference Property Description spec The specification of the topic. KafkaTopicSpec status The status of the topic. KafkaTopicStatus | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkatopic-reference |
11.3. JSON Representation of a Network Resource | 11.3. JSON Representation of a Network Resource Example 11.2. A JSON representation of a network resource | [
"{ \"network\" : [ { \"data_center\" : { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255\", \"id\" : \"00000002-0002-0002-0002-000000000255\" }, \"stp\" : \"false\", \"mtu\" : \"0\", \"usages\" : { \"usage\" : [ \"vm\" ] }, \"name\" : \"ovirtmgmt\", \"description\" : \"Management Network\", \"href\" : \"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009\", \"id\" : \"00000000-0000-0000-0000-000000000009\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009/vnicprofiles\", \"rel\" : \"vnicprofiles\" }, { \"href\" : \"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009/labels\", \"rel\" : \"labels\" } ] } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/json_representation_of_a_network_resource |
Chapter 8. SelfSubjectReview [authentication.k8s.io/v1] | Chapter 8. SelfSubjectReview [authentication.k8s.io/v1] Description SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata status object SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user. 8.1.1. .status Description SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user. Type object Property Type Description userInfo object UserInfo holds the information about the user needed to implement the user.Info interface. 8.1.2. .status.userInfo Description UserInfo holds the information about the user needed to implement the user.Info interface. Type object Property Type Description extra object Any additional information provided by the authenticator. extra{} array (string) groups array (string) The names of groups this user is a part of. uid string A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs. username string The name that uniquely identifies this user among all active users. 8.1.3. .status.userInfo.extra Description Any additional information provided by the authenticator. Type object 8.2. API endpoints The following API endpoints are available: /apis/authentication.k8s.io/v1/selfsubjectreviews POST : create a SelfSubjectReview 8.2.1. /apis/authentication.k8s.io/v1/selfsubjectreviews Table 8.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectReview Table 8.2. Body parameters Parameter Type Description body SelfSubjectReview schema Table 8.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectReview schema 201 - Created SelfSubjectReview schema 202 - Accepted SelfSubjectReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authorization_apis/selfsubjectreview-authentication-k8s-io-v1
Chapter 2. Differences between java and alt-java | Chapter 2. Differences between java and alt-java Similarities exist between alt-java and java binaries, with the exception of the SSB mitigation. Although the SSB mitigation patch exists only for the x86-64 architecture, Intel and AMD, the alt-java binary exists on all architectures. For non-x86 architectures, the alt-java binary is identical to the java binary, except that alt-java has no patches. Additional resources For more information about similarities between alt-java and java , see RH1750419 in the Red Hat Bugzilla documentation. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_alt-java/diff-java-and-altjava
Chapter 22. Software management | Chapter 22. Software management The following chapter contains the most notable changes to software management between RHEL 8 and RHEL 9. 22.1. Notable changes to software management Package management with DNF/YUM In Red Hat Enterprise Linux 9, software installation is ensured by DNF . Red Hat continues to support the usage of the yum term for consistency with major versions of RHEL. If you type dnf instead of yum , the command works as expected because both are aliases for compatibility. Although RHEL 8 and RHEL 9 are based on DNF , they are compatible with YUM used in RHEL 7. For more information, see Managing software with the DNF tool . Notable RPM features and changes Red Hat Enterprise Linux 9 is distributed with RPM version 4.16. This version introduces many enhancements over its versions. Notable features include: New SPEC features, most notably: Fast macro-based dependency generators It is now possible to define dependency generators as regular RPM macros. This is especially useful in combination with the embedded Lua interpreter ( %{lua:... } ) because it enables writing sophisticated yet fast generators and avoiding redundant forking and executing a shell script. Example: The %generate_buildrequires section that enables generating dynamic build dependencies Additional build dependencies can now be generated programmatically at RPM build time, using the newly available %generate_buildrequires section. This is useful when packaging software written in a language in which a specialized utility is commonly used to determine run-time or build-time dependencies, such as Rust, Golang, Node.js, Ruby, Python or Haskell. Meta (unordered) dependencies A new dependency qualifier called meta enables expressing dependencies that are not specifically install-time or run-time dependencies. This is useful for avoiding unnecessary dependency loops that could otherwise arise from the normal dependency ordering, such as when specifying the dependencies of a meta package. Example: Native version comparison in expressions It is now possible to compare arbitrary version strings in expressions by using the newly supported v"... " format. Example: Caret version operator, opposite of tilde The new caret ( ^ ) operator can be used to express a version that is higher than the base version. It is a complement to the existing tilde ( ~ ) operator which has the opposite semantics. %elif , %elifos and %elifarch statements Optional automatic patch and source numbering Patch: and Source: tags without a number are now automatically numbered based on the order in which they are listed. %autopatch now accepts patch ranges The %autopatch macro now accepts the -m and -M parameters to limit the minimum and maximum patch number to apply, respectively. %patchlist and %sourcelist sections It is now possible to list patch and source files without preceding each item with the respective Patch : and Source: tags by using the newly added %patchlist and %sourcelist sections. A more intuitive way to declare build conditionals Starting from RHEL 9.2, you can use the new %bcond macro to build conditionals. The %bcond macro takes a build conditional name and the default value as arguments. Compared to the old %bcond_with and %bcond_without macros, %bcond is easier to understand and allows you to calculate the default value at build time. The default value can be any numeric expression. 
Example: To create a gnutls build conditional, enabled by default: To create a bootstrap build conditional, disabled by default: To create an openssl build conditional, defaulting to opposite of gnutls : The RPM database is now based on the sqlite library. Read-only support for BerkeleyDB databases has been retained for migration and query purposes. A new rpm-plugin-audit plug-in for issuing audit log events on transactions, previously built into RPM itself Increased parallelism in package builds There have been numerous improvements to the way the package build process is parallelized. These improvements involve various buildroot policy scripts and sanity checks, file classification, and subpackage creation and ordering. As a result, package builds on multiprocessor systems, particularly for large packages, should now be faster and more efficient. Enforced UTF-8 validation of header data at build-time RPM now supports the Zstandard ( zstd ) compression algorithm In RHEL 9, the default RPM compression algorithm has switched to Zstandard ( zstd ). As a result, packages now install faster, which can be especially noticeable during large transactions. | [
"%__foo_provides() %{basename:%{1}}",
"Requires(meta): <pkgname>",
"%if v\"%{python_version}\" < v\"3.9\"",
"%bcond gnutls 1",
"%bcond bootstrap 0",
"%bcond openssl %{without gnutls}"
]
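The spec-file fragment below pulls together two of the features described in this chapter, %bcond and %generate_buildrequires. It is a sketch only: the package name, source tarball, and the dependencies echoed by the script are placeholders, not taken from the original chapter.
# Build conditional named "docs", enabled by default
%bcond docs 1

Name:           example-app
Version:        1.0
Release:        1%{?dist}
Summary:        Example package illustrating RPM 4.16 spec features
License:        MIT
Source0:        example-app-1.0.tar.gz

%generate_buildrequires
# Every line printed to stdout is treated as an additional BuildRequires
echo "python3-devel"
%{?with_docs:echo "python3-sphinx"}

%description
Example package (the remaining sections are omitted from this sketch).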
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_software-management_considerations-in-adopting-rhel-9 |
Chapter 6. Installer-provisioned postinstallation configuration | Chapter 6. Installer-provisioned postinstallation configuration After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures. 6.1. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. 
server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes. USD oc apply -f 99-master-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes. USD oc apply -f 99-worker-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created Check the status of the applied NTP settings. USD oc describe machineconfigpool 6.2. Enabling a provisioning network after installation The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network. You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO). Prerequisites A dedicated physical network must exist, connected to all worker and control plane nodes. You must isolate the native, untagged physical network. The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed . You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting. Procedure When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1 . Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file: USD oc get provisioning -o yaml > enable-provisioning-nw.yaml Modify the provisioning CR file: USD vim ~/enable-provisioning-nw.yaml Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed . Then, add the provisioningIP , provisioningNetworkCIDR , provisioningDHCPRange , provisioningInterface , and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting. apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6 1 The provisioningNetwork is one of Managed , Unmanaged , or Disabled . When set to Managed , Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. 
When set to Unmanaged , the system administrator configures the DHCP server manually. 2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled . The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server. 3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled . For example: 192.168.0.1/24 . 4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled . For example: 192.168.0.64, 192.168.0.253 . 5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled . Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead. 6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false . Save the changes to the provisioning CR file. Apply the provisioning CR file to the cluster: USD oc apply -f enable-provisioning-nw.yaml 6.3. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 6.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 6.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 6.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. 
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 6.3.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private | [
"sudo dnf -y install butane",
"variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"oc apply -f 99-master-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created",
"oc apply -f 99-worker-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created",
"oc describe machineconfigpool",
"oc get provisioning -o yaml > enable-provisioning-nw.yaml",
"vim ~/enable-provisioning-nw.yaml",
"apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6",
"oc apply -f enable-provisioning-nw.yaml",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
]
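Once the chrony MachineConfig objects from the NTP procedure in this chapter have rolled out, the result can be checked from the cluster. This is a sketch of one possible verification; <worker_node_name> is a placeholder for one of your worker nodes.
# Wait until the master and worker pools report that the new configuration is applied
oc get machineconfigpool
# Confirm that a worker node now uses the control plane nodes as its NTP sources
oc debug node/<worker_node_name> -- chroot /host chronyc sources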
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-post-installation-configuration |
7.144. microcode_ctl | 7.144. microcode_ctl 7.144.1. RHBA-2013:0348 - microcode_ctl bug fix and enhancement update Updated microcode_ctl packages that fix a bug and add various enhancements are now available for Red Hat Enterprise Linux 6. The microcode_ctl packages provide utility code and microcode data to assist the kernel in updating the CPU microcode at system boot time. This microcode supports all current x86-based, Intel 64-based, and AMD64-based CPU models. It takes advantage of the mechanism built into Linux that allows microcode to be updated after system boot. When loaded, the updated microcode corrects the behavior of various processors, as described in processor specification updates issued by Intel and AMD for those processors. Bug Fix BZ#740932 Previously, a udev rule in /lib/udev/rules.d/89-microcode.rules allowed the module to load more than once. On very large systems (for example, systems with 2048 or more CPUs), this could result in the system becoming unresponsive on boot. With this update, the udev rule has been changed to ensure the module loads only once. Very large systems now boot as expected. Enhancements BZ#818096 The Intel CPU microcode file has been updated to version 20120606. BZ#867078 The AMD CPU microcode file has been updated to version 20120910. All users of microcode_ctl are advised to upgrade to these updated packages, which fix this bug and add these enhancements. Note: a system reboot is necessary for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/microcode_ctl
Chapter 69. UsedNodePoolStatus schema reference | Chapter 69. UsedNodePoolStatus schema reference Used in: KafkaStatus Property Property type Description name string The name of the KafkaNodePool used by this Kafka resource. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-UsedNodePoolStatus-reference |
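As a sketch of where this type appears, the status section of a Kafka resource that uses node pools typically lists the pools by name; the pool names below are placeholders only.
status:
  kafkaNodePools:
    - name: controller-pool
    - name: broker-pool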
4.5. Controlling LVM Device Scans with Filters | 4.5. Controlling LVM Device Scans with Filters At startup, the vgscan command is run to scan the block devices on the system looking for LVM labels, to determine which of them are physical volumes and to read the metadata and build up a list of volume groups. The names of the physical volumes are stored in the LVM cache file of each node in the system, /etc/lvm/cache/.cache . Subsequent commands may read that file to avoid rescanning. You can control which devices LVM scans by setting up filters in the lvm.conf configuration file. The filters in the lvm.conf file consist of a series of simple regular expressions that get applied to the device names in the /dev directory to decide whether to accept or reject each block device found. The following examples show the use of filters to control which devices LVM scans. Note that some of these examples do not necessarily represent recommended practice, as the regular expressions are matched freely against the complete pathname. For example, a/loop/ is equivalent to a/.*loop.*/ and would match /dev/solooperation/lvol1 . The following filter adds all discovered devices, which is the default behavior as there is no filter configured in the configuration file: The following filter removes the cdrom device in order to avoid delays if the drive contains no media: The following filter adds all loop devices and removes all other block devices: The following filter adds all loop and IDE devices and removes all other block devices: The following filter adds just partition 8 on the first IDE drive and removes all other block devices: Note When the lvmetad daemon is running, the filter = setting in the /etc/lvm/lvm.conf file does not apply when you execute the pvscan --cache device command. To filter devices, you need to use the global_filter = setting. Devices that fail the global filter are not opened by LVM and are never scanned. You may need to use a global filter, for example, when you use LVM devices in VMs and you do not want the contents of the devices in the VMs to be scanned by the physical host. For more information on the lvm.conf file, see Appendix B, The LVM Configuration Files and the lvm.conf (5) man page. | [
"filter = [ \"a/.*/\" ]",
"filter = [ \"r|/dev/cdrom|\" ]",
"filter = [ \"a/loop.*/\", \"r/.*/\" ]",
"filter =[ \"a|loop.*|\", \"a|/dev/hd.*|\", \"r|.*|\" ]",
"filter = [ \"a|^/dev/hda8USD|\", \"r/.*/\" ]"
]
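A global filter is defined in the devices section of /etc/lvm/lvm.conf with the same syntax as a regular filter. The fragment below is a sketch only; the device path is an example of a disk holding virtual machine images that the host should never scan.
devices {
    # Reject the disk that backs guest images so that neither LVM commands
    # nor lvmetad ever scan its contents, and accept everything else.
    global_filter = [ "r|^/dev/sdb|", "a|.*|" ]
}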
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_filters |
About Quay IO | About Quay IO Red Hat Quay 3 About Quay IO Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/making-open-source-more-inclusive_jws-on-openshift |
Chapter 95. OtherArtifact schema reference | Chapter 95. OtherArtifact schema reference Used in: Plugin Property Property type Description url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. fileName string Name under which the artifact will be stored. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. type string Must be other . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-OtherArtifact-reference |
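As a sketch of how these properties fit together, an artifact of type other can be listed under a connector plugin in a KafkaConnect build specification. The plugin name, URL, checksum, and file name below are placeholders only.
spec:
  build:
    plugins:
      - name: my-connector-plugin
        artifacts:
          - type: other
            url: https://my-artifact-server.example.com/files/extra-config.json
            sha512sum: <sha512-checksum-of-the-file>
            fileName: extra-config.json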
14.8.14. smbspool | 14.8.14. smbspool smbspool <job> <user> <title> <copies> <options> <filename> The smbspool program is a CUPS-compatible printing interface to Samba. Although designed for use with CUPS printers, smbspool can work with non-CUPS printers as well. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbspool |
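Because smbspool follows the CUPS backend calling convention shown above, it can also be invoked by hand for testing. This is a sketch only, assuming the printer URI is supplied through the DEVICE_URI environment variable; the server, share, credentials, and file are placeholders.
# Arguments follow the order <job> <user> <title> <copies> <options> <filename>
DEVICE_URI="smb://username:password@printserver/printshare" \
    smbspool 123 jsmith "test page" 1 "" /path/to/document.ps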
Chapter 5. Installing Capsule on AWS | Chapter 5. Installing Capsule on AWS On your AWS environment, complete the following steps: Connect to the new instance. Install Capsule Server. For more information, see Installing Capsule Server . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/installing_capsule_on_aws |
Chapter 4. tuned | Chapter 4. tuned 4.1. Introduction This chapter covers using tuned daemon for dynamically tuning system settings in virtualized environments. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-tuned |
Chapter 11. Directory Structure | Chapter 11. Directory Structure 11.1. Red Hat JBoss Data Virtualization File Structure The following shows the contents of the Red Hat JBoss Data Virtualization deployment within a Red Hat JBoss EAP instance: | [
"EAP-6.4.0/ βββ appclient β βββ configuration β βββ appclient.xml β βββ logging.properties βββ bin β βββ add-user.bat β βββ add-user.properties β βββ add-user.sh β βββ appclient.bat β βββ appclient.conf β βββ appclient.conf.bat β βββ appclient.sh β βββ client β β βββ jboss-cli-client.jar β β βββ jboss-client.jar β β βββ README-CLI-JCONSOLE.txt β β βββ README-EJB-JMS.txt β βββ domain.bat β βββ domain.conf β βββ domain.conf.bat β βββ domain.sh β βββ init.d β β βββ jboss-as.conf β β βββ jboss-as-domain.sh β β βββ jboss-as-standalone.sh β βββ jboss-cli.bat β βββ jboss-cli-logging.properties β βββ jboss-cli.sh β βββ jboss-cli.xml β βββ jconsole.bat β βββ jconsole.sh β βββ jdr.bat β βββ jdr.sh β βββ product.conf β βββ run.bat β βββ run.sh β βββ standalone.bat β βββ standalone.conf β βββ standalone.conf.bat β βββ standalone.sh β βββ teiid-oauth-util.bat β βββ teiid-oauth-util.sh β βββ vault.bat β βββ vault.sh β βββ wsconsume.bat β βββ wsconsume.sh β βββ wsprovide.bat β βββ wsprovide.sh βββ bundles β βββ system β βββ layers β βββ base βββ cli-scripts β βββ disable-welcome-root.cli β βββ disable-welcome-root-domain.cli β βββ ModeShape-domain.cli β βββ ModeShape-ds.properties β βββ ModeShape-standalone.cli β βββ teiid-add-database-logger.cli β βββ teiid-add-database-logger-domain.cli β βββ teiid-dashboard-add_datasource.cli β βββ teiid-dashboard-domain-add_datasource.cli β βββ teiid-deploy-dashboard.cli β βββ teiid-deploy-dashboard-domain.cli β βββ teiid-domain-auditcommand-logging.cli β βββ teiid-domain-install-ds-builder-war.cli β βββ teiid-domain-install-vdb-builder-war.cli β βββ teiid-domain-mode-install.cli β βββ teiid-logger-ds.properties β βββ teiid-modeshape-domain.cli β βββ teiid-modeshape-standalone.cli β βββ teiid-standalone-auditcommand-logging.cli β βββ teiid-standalone-ha-mode-install.cli β βββ teiid-standalone-install-ds-builder-war.cli β βββ teiid-standalone-install-vdb-builder-war.cli β βββ teiid-standalone-mode-install.cli βββ dataVirtualization β βββ dataServiceBuilder β β βββ komodo-rest.war β β βββ vdb-bench-doc.war β β βββ vdb-bench-war.war β βββ jdbc β β βββ modeshape-client-3.8.4.GA-redhat-64-12.jar β β βββ teiid-8.12.11.6_4-redhat-64-12-jdbc.jar β β βββ teiid-hibernate-dialect-8.12.11.6_4-redhat-64-12.jar β βββ logging β β βββ database-service.jar β βββ rest-client β β βββ modeshape-client-3.8.4.GA-redhat-64-12.jar β β βββ README.txt β β βββ restclient.bat β β βββ restclient.sh β βββ teiid-adminshell β β βββ teiid-8.12.11.6_4-redhat-64-12-adminshell-dist.zip β βββ teiid-dashboard β β βββ teiid-dashboard-builder.war β β βββ admin β β βββ ckeditor β β βββ common β β βββ components β β βββ configuration β β βββ error.jsp β β βββ favicon.ico β β βββ images β β βββ index.jsp β β βββ js β β βββ js-api β β βββ login_failed.jsp β β βββ login.jsp β β βββ META-INF β β βββ not_authorized.jsp β β βββ panels β β βββ redhat β β βββ robots.txt β β βββ section β β βββ system β β βββ templates β β βββ WEB-INF β βββ vdb β βββ ModeShape.vdb β βββ teiid-odata.war β βββ teiid-olingo-odata4.war βββ docs β βββ examples β β βββ configs β β βββ standalone-genericjms.xml β β βββ standalone-hornetq-colocated.xml β β βββ standalone-jts.xml β β βββ standalone-minimalistic.xml β β βββ standalone-osgi-only.xml β β βββ standalone-picketlink.xml β β βββ standalone-xts.xml β βββ licenses β β βββ apache software license, version 2.0 - apache-2.0.txt β β βββ common development and distribution license - cddl.txt β β βββ common public license, version 1.0 - cpl-1.0.txt β β βββ eclipse distribution license, 
version 1.0 - edl-1.0.txt β β βββ eclipse public license, version 1.0 - epl-1.0.txt β β βββ gnu general public license, version 2 - gpl-2.0.txt β β βββ gnu general public license, version 2 with the classpath exception - gpl-2.0-ce.txt β β βββ gnu lesser general public license, version 2.1 - lgpl-2.1.txt β β βββ gnu library general public license, version 2 - lgpl-2.0.txt β β βββ h2 license, version 1.0 - h2.txt β β βββ jcip-cc-by-2.5.txt β β βββ jdom license - jdom-1.0.txt β β βββ mozilla public license, version 1.1 - mpl-1.1.txt β β βββ osgi-1.0.txt β β βββ the bsd license - bsd.txt β β βββ the jython license - license.html β β βββ the mit license - mit.txt β β βββ the werken company license - license.html β β βββ w3c software notice and license - w3c.txt β βββ licenses-datavirt β β βββ com.amazonaws,aws-java-sdk,ApacheLicense,Version2.0 β β βββ com.beust,jcommander,TheApacheSoftwareLicense,Version2.0 β β βββ com.codahale.metrics,metrics-core,ApacheLicense2.0 β β βββ com.couchbase.client,core-io,TheApacheSoftwareLicense,Version2.0 β β βββ com.couchbase.client,java-client,TheApacheSoftwareLicense,Version2.0 β β βββ com.datastax.cassandra,cassandra-driver-core,Apache2 β β βββ com.drewnoakes,metadata-extractor,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml,aalto-xml,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.core,jackson-annotations,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.core,jackson-core,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.core,jackson-databind,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.dataformat,jackson-dataformat-xml,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.dataformat,jackson-dataformat-yaml,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.datatype,jackson-datatype-joda,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.jaxrs,jackson-jaxrs-base,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.jaxrs,jackson-jaxrs-json-provider,TheApacheSoftwareLicense,Version2.0 β β βββ com.fasterxml.jackson.module,jackson-module-jaxb-annotations,TheApacheSoftwareLicense,Version2.0 β β βββ com.force.api,force-partner-api,FreeBSDLicense β β βββ com.force.api,force-wsc,FreeBSDLicense β β βββ com.github.junrar,junrar,UnRarLicense β β βββ com.github.virtuald,curvesapi,CDDL+GPLLicense β β βββ com.google.code.findbugs,annotations,GNULesserPublicLicense β β βββ com.google.code.gson,gson,TheApacheSoftwareLicense,Version2.0 β β βββ com.googlecode.json-simple,json-simple,TheApacheSoftwareLicense,Version2.0 β β βββ com.googlecode.juniversalchardet,juniversalchardet,MozillaPublicLicense1.1(MPL1.1) β β βββ com.googlecode.mp4parser,isoparser,ApacheSoftwareLicense-Version2.0 β β βββ com.google.guava,guava,TheApacheSoftwareLicense,Version2.0 β β βββ com.healthmarketscience.jackcess,jackcess,ApacheLicense,Version2.0 β β βββ com.healthmarketscience.jackcess,jackcess-encrypt,ApacheLicense,Version2.0 β β βββ com.jcraft,jsch,RevisedBSD β β βββ com.mchange,c3p0,GNULesserGeneralPublicLicense,Version2.1 β β βββ commons-cli,commons-cli,TheApacheSoftwareLicense,Version2.0 β β βββ commons-codec,commons-codec,TheApacheSoftwareLicense,Version2.0 β β βββ commons-collections,commons-collections,TheApacheSoftwareLicense,Version2.0 β β βββ commons-fileupload,commons-fileupload,TheApacheSoftwareLicense,Version2.0 β β βββ commons-httpclient,commons-httpclient,ApacheLicense β β βββ commons-io,commons-io,TheApacheSoftwareLicense,Version2.0 β β βββ 
commons-jxpath,commons-jxpath,TheApacheSoftwareLicense,Version2.0 β β βββ commons-lang,commons-lang,TheApacheSoftwareLicense,Version2.0 β β βββ commons-logging,commons-logging,ApacheLicense,Version2.0 β β βββ commons-logging,commons-logging-api,ApacheLicense,Version2.0 β β βββ commons-logging,commons-logging-api,TheApacheSoftwareLicense,Version2.0 β β βββ commons-logging,commons-logging,TheApacheSoftwareLicense,Version2.0 β β βββ commons-pool,commons-pool,TheApacheSoftwareLicense,Version2.0 β β βββ commons-vfs,commons-vfs,ApacheLicense,Version2.0 β β βββ commons-vfs,commons-vfs,TheApacheSoftwareLicense,Version2.0 β β βββ com.pff,java-libpst,TheApacheSoftwareLicense,Version2.0 β β βββ com.rometools,rome,TheApacheSoftwareLicense,Version2.0 β β βββ com.rometools,rome-utils,TheApacheSoftwareLicense,Version2.0 β β βββ com.squareup,protoparser,Apache2.0 β β βββ com.vividsolutions,jts,LesserGeneralPublicLicense(LGPL) β β βββ dom4j,dom4j,BSD2.0+ β β βββ edu.ucar,cdm,(MIT-style)netCDFClibrarylicense β β βββ edu.ucar,grib,(MIT-style)netCDFClibrarylicense β β βββ edu.ucar,httpservices,(MIT-style)netCDFClibrarylicense β β βββ edu.ucar,jj2000,TheApacheSoftwareLicense,Version2.0 β β βββ edu.ucar,netcdf4,(MIT-style)netCDFClibrarylicense β β βββ edu.ucar,udunits,(MIT-style)netCDFClibrarylicense β β βββ io.hawt,hawtio-core,TheApacheSoftwareLicense,Version2.0 β β βββ io.hawt,hawtio-git,TheApacheSoftwareLicense,Version2.0 β β βββ io.hawt,hawtio-system,TheApacheSoftwareLicense,Version2.0 β β βββ io.hawt,hawtio-util,TheApacheSoftwareLicense,Version2.0 β β βββ io.swagger,swagger-annotations,ApacheLicense2.0 β β βββ io.swagger,swagger-core,ApacheLicense2.0 β β βββ io.swagger,swagger-jaxrs,ApacheLicense2.0 β β βββ io.swagger,swagger-models,ApacheLicense2.0 β β βββ jakarta-regexp,jakarta-regexp,ApacheLicense,Version2.0 β β βββ javax.activation,activation,COMMONDEVELOPMENTANDDISTRIBUTIONLICENSE(CDDL)Version1.0 β β βββ javax.jcr,jcr,AdobeDayJCRLicense β β βββ javax.jcr,jcr,DaySpecificationLicense β β βββ javax.jcr,jcr,DaySpecificationLicenseaddendum β β βββ javax.measure,jsr-275,JScienceBSDLicense β β βββ javax.measure,jsr-275,SpecificationLicense β β βββ javax.validation,validation-api,ApacheLicense,Version2.0 β β βββ javax.ws.rs,jsr311-api,CDDLLicense β β βββ javax.xml.stream,stax-api,COMMONDEVELOPMENTANDDISTRIBUTIONLICENSE(CDDL)Version1.0 β β βββ javax.xml.stream,stax-api,GNUGeneralPublicLibrary β β βββ jline,jline,TheBSDLicense β β βββ joda-time,joda-time,Apache2 β β βββ licenses.css β β βββ licenses.html β β βββ licenses.xml β β βββ log4j,log4j,TheApacheSoftwareLicense,Version2.0 β β βββ net.jcip,jcip-annotations,CreativeCommonsAttributionlicense2.5 β β βββ net.oauth.core,oauth,TheApacheSoftwareLicense,Version2.0 β β βββ net.sf.opencsv,opencsv,Apache2 β β βββ org.antlr,antlr4-runtime,BSD3-ClauseLicense β β βββ org.antlr,antlr-runtime,BSDlicence β β βββ org.antlr,stringtemplate,BSDlicence β β βββ org.apache.accumulo,accumulo-core,ApacheLicense,Version2.0 β β βββ org.apache.accumulo,accumulo-fate,ApacheLicense,Version2.0 β β βββ org.apache.accumulo,accumulo-trace,ApacheLicense,Version2.0 β β βββ org.apache.avro,avro,PublicDomain β β βββ org.apache.chemistry.opencmis,chemistry-opencmis-commons-api,Apache2 β β βββ org.apache.chemistry.opencmis,chemistry-opencmis-commons-impl,Apache2 β β βββ org.apache.chemistry.opencmis,chemistry-opencmis-server-bindings,Apache2 β β βββ org.apache.chemistry.opencmis,chemistry-opencmis-server-support,Apache2 β β βββ org.apache.commons,commons-collections4,ApacheLicense,Version2.0 β 
β βββ org.apache.commons,commons-compress,ApacheLicense,Version2.0 β β βββ org.apache.commons,commons-csv,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.commons,commons-exec,ApacheLicense,Version2.0 β β βββ org.apache.commons,commons-lang3,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.commons,commons-vfs2,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-bindings-xml,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-frontend-jaxrs,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-rs-client,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-rs-security-oauth2-saml,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-rs-security-oauth2,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-rs-security-oauth,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-transports-http-hc,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.cxf,cxf-rt-transports-http,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache-extras.beanshell,bsh,ApacheLicense,Version2.0 β β βββ org.apache.felix,org.apache.felix.fileinstall,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.felix,org.apache.felix.framework,ApacheLicense,Version2.0 β β βββ org.apache.geronimo.specs,geronimo-javamail_1.4_spec,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.httpcomponents,httpasyncclient,ApacheLicense β β βββ org.apache.httpcomponents,httpasyncclient,ApacheLicense,Version2.0 β β βββ org.apache.httpcomponents,httpclient,ApacheLicense,Version2.0 β β βββ org.apache.httpcomponents,httpcore,ApacheLicense,Version2.0 β β βββ org.apache.httpcomponents,httpcore-nio,ApacheLicense β β βββ org.apache.httpcomponents,httpcore-nio,ApacheLicense,Version2.0 β β βββ org.apache.httpcomponents,httpmime,ApacheLicense,Version2.0 β β βββ org.apache.james,apache-mime4j-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.james,apache-mime4j-dom,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.lucene,lucene-core,Apache2 β β βββ org.apache.lucene,lucene-facet,Apache2 β β βββ org.apache.maven.scm,maven-scm-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.maven.scm,maven-scm-provider-svn-commons,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.maven.scm,maven-scm-provider-svnexe,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-client-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-client-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-commons-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-commons-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-server-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-server-core-ext,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.olingo,odata-server-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.opennlp,opennlp-maxent,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.opennlp,opennlp-tools,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.pdfbox,fontbox,ApacheLicense,Version2.0 β β βββ org.apache.pdfbox,jempbox,ApacheLicense,Version2.0 β β βββ org.apache.pdfbox,pdfbox,ApacheLicense,Version2.0 β β βββ org.apache.pdfbox,pdfbox-debugger,ApacheLicense,Version2.0 β β βββ 
org.apache.pdfbox,pdfbox-tools,ApacheLicense,Version2.0 β β βββ org.apache.poi,poi-ooxml-schemas,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.poi,poi-ooxml,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.poi,poi,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.tika,tika-core,ApacheLicense,Version2.0 β β βββ org.apache.tika,tika-parsers,ApacheLicense,Version2.0 β β βββ org.apache.ws.xmlschema,xmlschema-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.xmlbeans,xmlbeans,TheApacheSoftwareLicense,Version2.0 β β βββ org.apache.zookeeper,zookeeper,ApacheLicense,Version2.0 β β βββ org.codehaus.jackson,jackson-core-asl,TheApacheSoftwareLicense,Version2.0 β β βββ org.codehaus.jackson,jackson-mapper-asl,TheApacheSoftwareLicense,Version2.0 β β βββ org.codehaus.jettison,jettison,PublicDomain β β βββ org.codehaus.plexus,plexus-utils,TheApacheSoftwareLicense,Version2.0 β β βββ org.codehaus.woodstox,stax2-api,TheBSDLicense β β βββ org.codehaus.woodstox,woodstox-core-asl,TheApacheSoftwareLicense,Version2.0 β β βββ org.codelibs,jhighlight,CDDL,v1.0 β β βββ org.codelibs,jhighlight,LGPL,v2.1orlater β β βββ org.eclipse.jgit,org.eclipse.jgit,EclipsePublicLicense-v1.0 β β βββ org.fusesource.jansi,jansi,TheApacheSoftwareLicense,Version2.0 β β βββ org.gagravarr,vorbis-java-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.gagravarr,vorbis-java-tika,TheApacheSoftwareLicense,Version2.0 β β βββ org.hamcrest,hamcrest-core,NewBSDLicense β β βββ org.hibernate.common,hibernate-commons-annotations,GNULESSERGENERALPUBLICLICENSE β β βββ org.hibernate,hibernate-search-engine,GNULesserGeneralPublicLicense β β βββ org.hibernate,hibernate-validator,ApacheLicense,Version2.0 β β βββ org.infinispan,infinispan-cachestore-leveldb,GNULesserGeneralPublicLicense β β βββ org.infinispan,infinispan-core,GNULesserGeneralPublicLicense β β βββ org.iq80.leveldb,leveldb,ApacheLicense2.0 β β βββ org.iq80.leveldb,leveldb-api,ApacheLicense2.0 β β βββ org.itadaki,bzip2,MITLicense(MIT) β β βββ org,jaudiotagger,LGPL β β βββ org.javassist,javassist,ApacheLicense2.0 β β βββ org.javassist,javassist,LGPL2.1 β β βββ org.jboss.aesh,aesh,EclipseLicense,Version1.0 β β βββ org.jboss.as,jboss-as-build-config,lgpl β β βββ org.jboss.as,jboss-as-cli,lgpl β β βββ org.jboss.as,jboss-as-controller-client,lgpl β β βββ org.jboss.as,jboss-as-controller,lgpl β β βββ org.jboss.as,jboss-as-protocol,lgpl β β βββ org.jboss.as,jboss-as-version,lgpl β β βββ org.jboss.dashboard-builder,dashboard-commons,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-displayer-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-displayer-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-provider-api,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-provider-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-provider-csv,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-provider-sql,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.dashboard-builder,dashboard-security,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss,jboss-dmr,PublicDomain β β βββ org.jboss,jboss-vfs,asl β β βββ org.jboss.logging,jboss-logging,ApacheLicense,version2.0 β β βββ org.jboss.logmanager,jboss-logmanager,ApacheLicenseVersion2.0 β β βββ org.jboss.marshalling,jboss-marshalling,PublicDomain β β βββ org.jboss.marshalling,jboss-marshalling-river,PublicDomain β β βββ 
org.jboss.modules,jboss-modules,lgpl β β βββ org.jboss.msc,jboss-msc,PublicDomain β β βββ org.jboss.oreva,common,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.oreva,odata-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.remoting3,jboss-remoting,LGPL2.1 β β βββ org.jboss.remotingjmx,remoting-jmx,LGPL2.1 β β βββ org.jboss.sasl,jboss-sasl,PublicDomain β β βββ org.jboss.spec.javax.transaction,jboss-transaction-api_1.1_spec,CommonDevelopmentandDistributionLicense β β βββ org.jboss.spec.javax.transaction,jboss-transaction-api_1.1_spec,GNUGeneralPublicLicense,Version2withtheClasspathException β β βββ org.jboss,staxmapper,PublicDomain β β βββ org.jboss.teiid.connectors,connector-accumulo,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-cassandra,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-couchbase,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-file,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-google,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-infinispan.6,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-infinispan-dsl,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-infinispan-hotrod,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-ldap,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-mongodb,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-salesforce-34,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-salesforce,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-simpledb,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-solr,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,connector-ws,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,couchbase-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,document-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,google-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,infinispan-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,mongodb-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,simpledb-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-accumulo,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-amazon-s3,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-cassandra,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-couchbase,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-excel,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-file,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-google,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-hbase,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-hive,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-infinispan-cache,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-infinispan-dsl,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-infinispan-hotrod,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-jdbc,GNULesserGeneralPublicLicense β β βββ 
org.jboss.teiid.connectors,translator-jpa,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-ldap,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-loopback,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-mongodb,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-object,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-odata4,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-odata,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-olap,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-prestodb,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-salesforce-34,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-salesforce,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-simpledb,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-solr,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.connectors,translator-ws,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.extensions,database-logging-appender,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.extensions,database-service,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid.modeshape,teiid-modeshape-core,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.teiid.modeshape,teiid-modeshape-sequencer-dataservice,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.teiid.modeshape,teiid-modeshape-sequencer-ddl,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.teiid.modeshape,teiid-modeshape-sequencer-vdb,TheApacheSoftwareLicense,Version2.0 β β βββ org.jboss.teiid,teiid-admin,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-adminshell,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-api,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-client,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-common-core,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-engine,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-hibernate-dialect,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-jboss-admin,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-jboss-integration,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-jboss-security,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-metadata,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-olingo-common,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-olingo,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-olingo-patches,GNULesserGeneralPublicLicense β β βββ org.jboss.teiid,teiid-runtime,GNULesserGeneralPublicLicense β β βββ org.jboss.threads,jboss-threads,PublicDomain β β βββ org.jboss.xnio,xnio-api,PublicDomain β β βββ org.jboss.xnio,xnio-nio,PublicDomain β β βββ org.jfree,jcommon,GNULesserGeneralPublicLicence β β βββ org.jfree,jfreechart,GNULesserGeneralPublicLicence β β βββ org.jolokia,jolokia-core,Apache2 β β βββ org.json,json,providedwithoutsupportorwarranty β β βββ org.jsoup,jsoup,TheMITLicense β β βββ org.jvnet.mimepull,mimepull,CDDL1.1 β β βββ org.jvnet.staxex,stax-ex,CommonDevelopmentAndDistributionLicense(CDDL)Version1.0 β β βββ org.komodo,komodo-core,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-importer,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-modeshape,GNULesserGeneralPublicLicense β β βββ 
org.komodo,komodo-modeshape-sequencer-teiid-sql,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-modeshape-vdb,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-plugin-service,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-relational,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-spi,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-teiid-client,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-ui,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-utils,GNULesserGeneralPublicLicense β β βββ org.komodo,komodo-utils-modeshape-logger,GNULesserGeneralPublicLicense β β βββ org.komodo.plugins,komodo-plugin-framework,GNULesserGeneralPublicLicense β β βββ org.komodo.plugins.storage,storage-file,GNULesserGeneralPublicLicense β β βββ org.komodo.plugins.storage,storage-git,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-common,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-connector-git,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-connector-jdbc-metadata,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-extractor-tika,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-jbossas-subsystem,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-jcr-api,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-jcr,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-jdbc-local,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-schematic,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-ddl,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-images,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-java,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-mp3,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-msoffice,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-sramp,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-text,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-wsdl,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-xml,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-xsd,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-sequencer-zip,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-web-cmis,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-webdav,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-web-explorer,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-web-jcr,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-web-jcr-rest,GNULesserGeneralPublicLicense β β βββ org.modeshape,modeshape-web-jcr-webdav,GNULesserGeneralPublicLicense β β βββ org.noggit,noggit,TheApacheSoftwareLicense,Version2.0 β β βββ org.opengis,geoapi,OGCcopyright β β βββ org.osgeo,proj4j,TheApacheLicense,Version2.0 β β βββ org.osgi,org.osgi.core,PublicDomain β β βββ org.ow2.asm,asm,BSD β β βββ org.quartz-scheduler,quartz,Apache2.0 β β βββ org.reflections,reflections,TheNewBSDLicense β β βββ org.reflections,reflections,WTFPL β β βββ org.slf4j,slf4j-log4j12,MITLicense β β βββ org.springframework,spring-aop,TheApacheSoftwareLicense,Version2.0 β β βββ org.springframework,spring-beans,TheApacheSoftwareLicense,Version2.0 β β βββ org.springframework,spring-context,TheApacheSoftwareLicense,Version2.0 β β βββ org.springframework,spring-core,TheApacheSoftwareLicense,Version2.0 β β βββ 
org.springframework,spring-expression,TheApacheSoftwareLicense,Version2.0 β β βββ org.springframework,spring-tx,TheApacheSoftwareLicense,Version2.0 β β βββ org.tallison,jmatio,BSD β β βββ org.teiid,vdb-bench-assembly,TheApacheSoftwareLicense,Version2.0 β β βββ org.wildfly,wildfly-core-security-api,lgpl β β βββ org.wildfly,wildfly-core-security,lgpl β β βββ org.wololo,jts2geojson,MITlicense β β βββ stax,stax-api,COMMONDEVELOPMENTANDDISTRIBUTIONLICENSE(CDDL)Version1.0 β β βββ stax,stax-api,GNUGeneralPublicLibrary β β βββ wsdl4j,wsdl4j,CPL β β βββ xerces,xercesImpl,TheApacheSoftwareLicense,Version2.0 β βββ schema β β βββ application_1_2.dtd β β βββ application_1_3.dtd β β βββ application_1_4.xsd β β βββ application_5.xsd β β βββ application_6.xsd β β βββ application-client_6.xsd β β βββ ejb-jar_1_1.dtd β β βββ ejb-jar_2_0.dtd β β βββ ejb-jar_2_1.xsd β β βββ ejb-jar_3_0.xsd β β βββ ejb-jar_3_1.xsd β β βββ j2ee_1_4.xsd β β βββ j2ee_jaxrpc_mapping_1_1.xsd β β βββ j2ee_web_services_1_1.xsd β β βββ j2ee_web_services_client_1_1.xsd β β βββ javaee_5.xsd β β βββ javaee_6.xsd β β βββ javaee_web_services_1_2.xsd β β βββ javaee_web_services_1_3.xsd β β βββ javaee_web_services_client_1_2.xsd β β βββ javaee_web_services_client_1_3.xsd β β βββ java-properties_1_0.xsd β β βββ jboss_1_0.xsd β β βββ jboss-app_3_0.dtd β β βββ jboss-app_3_2.dtd β β βββ jboss-app_4_0.dtd β β βββ jboss-app_4_2.dtd β β βββ jboss-app_5_0.dtd β β βββ jboss-app_7_0.xsd β β βββ jboss-as-cli_1_0.xsd β β βββ jboss-as-cli_1_1.xsd β β βββ jboss-as-cli_1_2.xsd β β βββ jboss-as-cli_1_3.xsd β β βββ jboss-as-cmp_1_0.xsd β β βββ jboss-as-cmp_1_1.xsd β β βββ jboss-as-config_1_0.xsd β β βββ jboss-as-config_1_1.xsd β β βββ jboss-as-config_1_2.xsd β β βββ jboss-as-config_1_3.xsd β β βββ jboss-as-config_1_4.xsd β β βββ jboss-as-config_1_5.xsd β β βββ jboss-as-config_1_6.xsd β β βββ jboss-as-config_1_7.xsd β β βββ jboss-as-config_1_8.xsd β β βββ jboss-as-configadmin_1_0.xsd β β βββ jboss-as-datasources_1_0.xsd β β βββ jboss-as-datasources_1_1.xsd β β βββ jboss-as-datasources_1_2.xsd β β βββ jboss-as-deployment-scanner_1_0.xsd β β βββ jboss-as-deployment-scanner_1_1.xsd β β βββ jboss-as-ee_1_0.xsd β β βββ jboss-as-ee_1_1.xsd β β βββ jboss-as-ee_1_2.xsd β β βββ jboss-as-ejb3_1_0.xsd β β βββ jboss-as-ejb3_1_1.xsd β β βββ jboss-as-ejb3_1_2.xsd β β βββ jboss-as-ejb3_1_3.xsd β β βββ jboss-as-ejb3_1_4.xsd β β βββ jboss-as-ejb3_1_5.xsd β β βββ jboss-as-infinispan_1_0.xsd β β βββ jboss-as-infinispan_1_1.xsd β β βββ jboss-as-infinispan_1_2.xsd β β βββ jboss-as-infinispan_1_3.xsd β β βββ jboss-as-infinispan_1_4.xsd β β βββ jboss-as-infinispan_1_5.xsd β β βββ jboss-as-jacorb_1_0.xsd β β βββ jboss-as-jacorb_1_1.xsd β β βββ jboss-as-jacorb_1_2.xsd β β βββ jboss-as-jacorb_1_3.xsd β β βββ jboss-as-jacorb_1_4.xsd β β βββ jboss-as-jaxr_1_0.xsd β β βββ jboss-as-jaxr_1_1.xsd β β βββ jboss-as-jaxrs_1_0.xsd β β βββ jboss-as-jca_1_0.xsd β β βββ jboss-as-jca_1_1.xsd β β βββ jboss-as-jdr_1_0.xsd β β βββ jboss-as-jgroups_1_0.xsd β β βββ jboss-as-jgroups_1_1.xsd β β βββ jboss-as-jmx_1_0.xsd β β βββ jboss-as-jmx_1_1.xsd β β βββ jboss-as-jmx_1_2.xsd β β βββ jboss-as-jmx_1_3.xsd β β βββ jboss-as-jpa_1_0.xsd β β βββ jboss-as-jpa_1_1.xsd β β βββ jboss-as-jsf_1_0.xsd β β βββ jboss-as-jsr77_1_0.xsd β β βββ jboss-as-logging_1_0.xsd β β βββ jboss-as-logging_1_1.xsd β β βββ jboss-as-logging_1_2.xsd β β βββ jboss-as-logging_1_3.xsd β β βββ jboss-as-logging_1_4.xsd β β βββ jboss-as-logging_1_5.xsd β β βββ jboss-as-mail_1_0.xsd β β βββ jboss-as-mail_1_1.xsd β β βββ jboss-as-mail_1_2.xsd β 
β βββ jboss-as-messaging_1_0.xsd β β βββ jboss-as-messaging_1_1.xsd β β βββ jboss-as-messaging_1_2.xsd β β βββ jboss-as-messaging_1_3.xsd β β βββ jboss-as-messaging_1_4.xsd β β βββ jboss-as-messaging-deployment_1_0.xsd β β βββ jboss-as-mod-cluster_1_0.xsd β β βββ jboss-as-mod-cluster_1_1.xsd β β βββ jboss-as-mod-cluster_1_2.xsd β β βββ jboss-as-naming_1_0.xsd β β βββ jboss-as-naming_1_1.xsd β β βββ jboss-as-naming_1_2.xsd β β βββ jboss-as-naming_1_3.xsd β β βββ jboss-as-naming_1_4.xsd β β βββ jboss-as-osgi_1_0.xsd β β βββ jboss-as-osgi_1_1.xsd β β βββ jboss-as-osgi_1_2.xsd β β βββ jboss-as-pojo_1_0.xsd β β βββ jboss-as-remoting_1_0.xsd β β βββ jboss-as-remoting_1_1.xsd β β βββ jboss-as-remoting_1_2.xsd β β βββ jboss-as-resource-adapters_1_0.xsd β β βββ jboss-as-resource-adapters_1_1.xsd β β βββ jboss-as-sar_1_0.xsd β β βββ jboss-as-security_1_0.xsd β β βββ jboss-as-security_1_1.xsd β β βββ jboss-as-security_1_2.xsd β β βββ jboss-as-threads_1_0.xsd β β βββ jboss-as-threads_1_1.xsd β β βββ jboss-as-txn_1_0.xsd β β βββ jboss-as-txn_1_1.xsd β β βββ jboss-as-txn_1_2.xsd β β βββ jboss-as-txn_1_3.xsd β β βββ jboss-as-txn_1_4.xsd β β βββ jboss-as-txn_1_5.xsd β β βββ jboss-as-web_1_0.xsd β β βββ jboss-as-web_1_1.xsd β β βββ jboss-as-web_1_2.xsd β β βββ jboss-as-web_1_3.xsd β β βββ jboss-as-web_1_4.xsd β β βββ jboss-as-web_1_5.xsd β β βββ jboss-as-web_2_1.xsd β β βββ jboss-as-web_2_2.xsd β β βββ jboss-as-webservices_1_0.xsd β β βββ jboss-as-webservices_1_1.xsd β β βββ jboss-as-webservices_1_2.xsd β β βββ jboss-as-weld_1_0.xsd β β βββ jboss-as-xts_1_0.xsd β β βββ jboss-client_6_0.xsd β β βββ jboss-common_5_1.xsd β β βββ jboss-common_6_0.xsd β β βββ jboss-deployment-dependencies-1_0.xsd β β βββ jboss-deployment-structure-1_0.xsd β β βββ jboss-deployment-structure-1_1.xsd β β βββ jboss-deployment-structure-1_2.xsd β β βββ jboss-ejb3-2_0.xsd β β βββ jboss-ejb3-spec-2_0.xsd β β βββ jboss-ejb-cache_1_0.xsd β β βββ jboss-ejb-client_1_0.xsd β β βββ jboss-ejb-client_1_1.xsd β β βββ jboss-ejb-client_1_2.xsd β β βββ jboss-ejb-container-interceptors_1_0.xsd β β βββ jboss-ejb-delivery-active_1_0.xsd β β βββ jboss-ejb-iiop_1_0.xsd β β βββ jboss-ejb-pool_1_0.xsd β β βββ jboss-ejb-resource-adapter-binding_1_0.xsd β β βββ jboss-ejb-security_1_0.xsd β β βββ jboss-ejb-security_1_1.xsd β β βββ jboss-ejb-security-role_1_0.xsd β β βββ jboss-jpa_1_0.xsd β β βββ jboss-pojo_7_0.xsd β β βββ jboss-service_7_0.xsd β β βββ jboss-teiid.xsd β β βββ jboss-web_7_0.xsd β β βββ jboss-web_7_1.xsd β β βββ jboss-web_7_2.xsd β β βββ jbossws-jaxws-config_4_0.xsd β β βββ jbossws-web-services_1_0.xsd β β βββ jbxb_1_0.xsd β β βββ jndi-binding-service_1_0.xsd β β βββ jsp_2_0.xsd β β βββ jsp_2_1.xsd β β βββ jsp_2_2.xsd β β βββ module-1_0.xsd β β βββ module-1_1.xsd β β βββ module-1_2.xsd β β βββ module-1_3.xsd β β βββ orm_1_0.xsd β β βββ persistence_1_0.xsd β β βββ persistence_2_0.xsd β β βββ service-ref_4_0.dtd β β βββ service-ref_4_2.dtd β β βββ service-ref_5_0.dtd β β βββ trans-timeout-1_0.xsd β β βββ user-roles_1_0.xsd β β βββ web-app_2_2.dtd β β βββ web-app_2_3.dtd β β βββ web-app_2_4.xsd β β βββ web-app_2_5.xsd β β βββ web-app_3_0.xsd β β βββ web-common_3_0.xsd β β βββ web-facesconfig_1_0.dtd β β βββ web-facesconfig_1_1.dtd β β βββ web-facesconfig_1_2.xsd β β βββ web-fragment_3_0.xsd β β βββ web-jsptaglibrary_1_1.dtd β β βββ web-jsptaglibrary_1_2.dtd β β βββ web-jsptaglibrary_2_0.xsd β β βββ web-jsptaglibrary_2_1.xsd β β βββ wildfly-picketlink-federation_1_0.xsd β β βββ wildfly-picketlink-federation_1_1.xsd β β βββ 
wildfly-picketlink-idm_1_0.xsd β β βββ wildfly-picketlink-idm_1_1.xsd β βββ teiid β βββ datasources β β βββ accumulo β β βββ actian-vector β β βββ amazon-s3 β β βββ cassandra β β βββ couchbase β β βββ db2 β β βββ derby β β βββ file β β βββ google β β βββ h2 β β βββ hive β β βββ impala β β βββ infinispan β β βββ infinispan-hotrod-7.1 β β βββ ingres β β βββ intersystems-cache β β βββ ldap β β βββ mongodb β β βββ mysql β β βββ odbc β β βββ olap β β βββ oracle β β βββ osisoft-pi β β βββ phoenix β β βββ postgresql β β βββ prestodb β β βββ redshift β β βββ salesforce β β βββ simpledb β β βββ solr β β βββ sqlserver β β βββ teiid β β βββ ucanaccess β β βββ vertica β β βββ web-service β βββ licenses β β βββ apache-2.0 - LICENSE-2.0.txt β β βββ LICENSE-lgpl-2.1.txt β β βββ MPL-1.0.html β β βββ MPL-1.1.html β β βββ PostgreSQL-BSD.txt β βββ schema β β βββ vdb-deployer.xsd β βββ teiid-releasenotes.html βββ domain β βββ configuration β β βββ application-roles.properties β β βββ application-users.properties β β βββ default-server-logging.properties β β βββ domain.xml β β βββ domain_xml_history β β β βββ 20180223-095813802 β β β βββ current β β β βββ domain.boot.xml β β β βββ domain.initial.xml β β β βββ domain.last.xml β β β βββ snapshot β β βββ host-master.xml β β βββ host-slave.xml β β βββ host.xml β β βββ host_xml_history β β β βββ 20180223-095813622 β β β βββ 20180223-095817935 β β β βββ current β β β βββ host.boot.xml β β β βββ host.initial.xml β β β βββ host.last.xml β β β βββ host-master.boot.xml β β β βββ host-master.initial.xml β β β βββ host-master.last.xml β β β βββ host-slave.boot.xml β β β βββ host-slave.initial.xml β β β βββ host-slave.last.xml β β β βββ snapshot β β βββ logging.properties β β βββ mgmt-groups.properties β β βββ mgmt-users.properties β βββ data β β βββ content β βββ deployments β β βββ integration-platform-console.war β β βββ eap.css β β βββ favicon.ico β β βββ images β β βββ index.jsp β β βββ WEB-INF β βββ log β β βββ host-controller.log β β βββ process-controller.log β βββ servers β βββ tmp β βββ auth βββ fusepatch β βββ repository β βββ workspace β βββ audit.log β βββ jboss-dv β β βββ 6.4.0 β βββ managed-paths.metadata βββ installation β βββ InstallationLog.txt β βββ InstallSummary.html β βββ server-logs β βββ host-controller.log β βββ process-controller.log β βββ server.log βββ JBossEULA.txt βββ jboss-modules.jar βββ LICENSE.txt βββ modules β βββ layers.conf β βββ system β βββ layers β βββ base β βββ dv βββ module.xml βββ pretty-print.xsl βββ quickstarts β βββ build.metadata β βββ drools-integration β β βββ pom.xml β β βββ README.md β β βββ src β β βββ main β β βββ modules β β βββ scripts β β βββ vdb β βββ dynamicvdb-datafederation β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ teiidfiles β β βββ vdb β βββ dynamicvdb-dataroles β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ dynamicvdb-materialization β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ dynamicvdb-restservice β β βββ http-client β β β βββ pom.xml β β β βββ src β β βββ pom.xml β β βββ README.md β β βββ resteasy-client β β β βββ pom.xml β β β βββ src β β βββ src β β βββ scripts β β βββ vdb β βββ hbase-as-a-datasource β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ teiidfiles β β βββ vdb β βββ hibernate-on-top-of-teiid β β βββ pom.xml β β βββ README.md β β βββ src β β βββ main β β βββ scripts β βββ jdg7.1-remote-cache β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ 
jdg7.1-remote-cache-materialization β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ jdg-remote-cache β β βββ kits β β β βββ jboss-as7 β β β βββ jboss-as7-dist.xml β β βββ pom.xml β β βββ README.md β β βββ src β β βββ jdg β β βββ main β β βββ vdb β βββ jdg-remote-cache-materialization β β βββ kits β β β βββ jboss-as7 β β β βββ jboss-as7-dist.xml β β βββ pom.xml β β βββ README.md β β βββ src β β βββ jdg β β βββ main β β βββ vdb β βββ ldap-as-a-datasource β β βββ ldap-add-group-users.md β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ mongodb-as-a-datasource β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ pom.xml β βββ README.md β βββ settings.xml β βββ simpleclient β β βββ pom.xml β β βββ README.md β β βββ src β β βββ main β βββ socialMedia-as-a-datasource β β βββ pom.xml β β βββ README.md β β βββ src β β βββ scripts β β βββ vdb β βββ tpch β β βββ change-tablenames.sh β β βββ config-files β β β βββ datasources.xml β β β βββ drop-create.sql β β β βββ modules β β β βββ postgres-dml.sql β β β βββ postgres-index.sql β β β βββ tpch-vdb.xml β β βββ generate-data.sh β β βββ generate-one-query.sh β β βββ generate-queries.sh β β βββ javaPerfTest β β β βββ pom.xml β β β βββ src β β βββ load-data-into-db.sh β β βββ query-templates β β β βββ 10.sql β β β βββ 11.sql β β β βββ 12.sql β β β βββ 13.sql β β β βββ 14.sql β β β βββ 15.sql β β β βββ 16.sql β β β βββ 17.sql β β β βββ 18.sql β β β βββ 19.sql β β β βββ 1.sql β β β βββ 20.sql β β β βββ 21.sql β β β βββ 22.sql β β β βββ 2.sql β β β βββ 3.sql β β β βββ 4.sql β β β βββ 5.sql β β β βββ 6.sql β β β βββ 7.sql β β β βββ 8.sql β β β βββ 9.sql β β β βββ dists.dss β β βββ README.md β β βββ run-test.sh β β βββ setenv.sh β βββ webservices-as-a-datasource β βββ pom.xml β βββ README.md β βββ src β β βββ main β β βββ scripts β β βββ vdb β βββ webapp β βββ WEB-INF βββ standalone β βββ configuration β β βββ application-roles.properties β β βββ application-users.properties β β βββ logging.properties β β βββ mgmt-groups.properties β β βββ mgmt-users.properties β β βββ standalone-full-ha.xml β β βββ standalone-full.xml β β βββ standalone-ha.xml β β βββ standalone-osgi.xml β β βββ standalone.xml β β βββ standalone_xml_history β β βββ 20180223-095659186 β β βββ 20180223-095719975 β β βββ current β β βββ snapshot β β βββ standalone.boot.xml β β βββ standalone-full-ha.boot.xml β β βββ standalone-full-ha.initial.xml β β βββ standalone-full-ha.last.xml β β βββ standalone-ha.boot.xml β β βββ standalone-ha.initial.xml β β βββ standalone-ha.last.xml β β βββ standalone.initial.xml β β βββ standalone.last.xml β βββ data β β βββ content β βββ deployments β β βββ integration-platform-console.war β β β βββ consoles β β β βββ eap.css β β β βββ favicon.ico β β β βββ images β β β βββ index.jsp β β β βββ WEB-INF β β βββ integration-platform-console.war.dodeploy β β βββ README.txt β βββ lib β β βββ ext β βββ log β β βββ backupgc.log.current β β βββ gc.log.0.current β β βββ server.log β βββ tmp β βββ auth β βββ vfs β βββ temp βββ vault β βββ VAULT.dat βββ vault.keystore βββ version-dv.txt βββ version.txt βββ welcome-content βββ eap.css βββ favicon.ico βββ fonts β βββ OpenSans-BoldItalic.ttf β βββ OpenSans-Bold.ttf β βββ OpenSans-ExtraBoldItalic.ttf β βββ OpenSans-ExtraBold.ttf β βββ OpenSans-Italic.ttf β βββ OpenSans-LightItalic.ttf β βββ OpenSans-Light.ttf β βββ OpenSans-Regular.ttf β βββ OpenSans-SemiboldItalic.ttf β βββ OpenSans-Semibold.ttf βββ images β βββ eap_bg.png β βββ header_bg.png β 
βββ prod_logo.png β βββ prod_name.png β βββ product_title.png βββ index.html βββ index_noconsole.html βββ noconsole.html βββ noredirect.html"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/chap-directory_structure |
13.2.25. SSSD and UID and GID Numbers | 13.2.25. SSSD and UID and GID Numbers When a user is created - using system tools such as useradd or through an application such as Red Hat Identity Management or other client tools - the user is automatically assigned a user ID number and a group ID number. When the user logs into a system or service, SSSD caches that user name with the associated UID/GID numbers. The UID number is then used as the identifying key for the user. If a user with the same name but a different UID attempts to log into the system, then SSSD treats it as two different users with a name collision. This means that SSSD does not recognize UID number changes; it interprets the change as a new, different user, not as an existing user with a different UID number. If an existing user changes the UID number, that user is prevented from logging into SSSD and associated services and domains. This also affects any client applications that use SSSD for identity information; the conflicting user will not be found or accessible to those applications. Important UID/GID changes are not supported in SSSD. If a user's UID/GID number changes for any reason, the SSSD cache must be cleared for that user before the user can log in again. For example: ~]# sss_cache -u jsmith Cleaning the SSSD cache is covered in the section called "Purging the SSSD Cache" . | [
"~]# sss_cache -u jsmith"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-system-uids |
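For reference, a minimal cache-clearing session is sketched below. The user name is a placeholder; the sss_cache options shown here (-u for a single user, -E for the entire cache) are standard, but confirm them against the sss_cache(8) man page on your release.

```
~]# sss_cache -u jsmith    # invalidate the cached entry for one user
~]# sss_cache -E           # invalidate all cached objects (users, groups, and other entries)
```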
1.4. Pacemaker Architecture Components | 1.4. Pacemaker Architecture Components A cluster configured with Pacemaker comprises separate component daemons that monitor cluster membership, scripts that manage the services, and resource management subsystems that monitor the disparate resources. The following components form the Pacemaker architecture: Cluster Information Base (CIB) The Pacemaker information daemon, which uses XML internally to distribute and synchronize current configuration and status information from the Designated Coordinator (DC) - a node assigned by Pacemaker to store and distribute cluster state and actions by means of the CIB - to all other cluster nodes. Cluster Resource Management Daemon (CRMd) Pacemaker cluster resource actions are routed through this daemon. Resources managed by CRMd can be queried by client systems, moved, instantiated, and changed when needed. Each cluster node also includes a local resource manager daemon (LRMd) that acts as an interface between CRMd and resources. LRMd passes commands from CRMd to the resource agents, such as start and stop requests, and relays status information back to CRMd. Shoot the Other Node in the Head (STONITH) Often deployed in conjunction with a power switch, STONITH acts as a cluster resource in Pacemaker that processes fence requests, forcefully powering down nodes and removing them from the cluster to ensure data integrity. STONITH is configured in CIB and can be monitored as a normal cluster resource. corosync corosync is the component - and a daemon of the same name - that serves the core membership and member-communication needs for high availability clusters. It is required for the High Availability Add-On to function. In addition to those membership and messaging functions, corosync also: Manages quorum rules and determination. Provides messaging capabilities for applications that coordinate or operate across multiple members of the cluster and thus must communicate stateful or other information between instances. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-Pacemakerarchitecture-HAAO
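As an illustration only (these commands are not part of the original overview), the standard RHEL 7 High Availability tools can be used to inspect each of the components described above; output and names depend on your cluster.

```
~]# pcs status                 # CRMd view of nodes, resources, and fencing agents
~]# pcs cluster cib            # dump the raw CIB XML that the DC synchronizes to all nodes
~]# pcs stonith                # list the configured STONITH (fencing) resources
~]# corosync-quorumtool -s     # corosync membership and quorum status
```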
Chapter 5. FIPS test | Chapter 5. FIPS test The Federal Information Processing Standard (FIPS) Publication 140-2 is a computer security standard developed by a U.S. Government and industry working group to validate the quality of cryptographic modules. FIPS publications (including 140-2) can be found at the following URL: http://csrc.nist.gov/publications/PubsFIPS.html . Note Red Hat recommends that partners enable FIPS mode on both control plane and data plane nodes. Additional resources For more information about FIPS, see Federal Information Processing Standard (FIPS) and Installing a Cluster in FIPS mode . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_policy_guide/con_fips-test_rhoso-policy-certifiation-lifecycle
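As a sketch of how FIPS mode is typically verified (not part of the certification policy text itself): the first two commands are standard RHEL tooling, and the final line assumes an OpenShift install-config.yaml in the current directory, where FIPS is enabled at installation time as described in "Installing a Cluster in FIPS mode".

```
~]# cat /proc/sys/crypto/fips_enabled   # prints 1 when the kernel runs in FIPS mode
~]# fips-mode-setup --check             # RHEL helper that reports whether FIPS mode is enabled
~]$ grep '^fips:' install-config.yaml   # clusters are installed with "fips: true" in install-config.yaml
```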
Chapter 25. Set Up Transactions | Chapter 25. Set Up Transactions 25.1. About Transactions A transaction consists of a collection of interdependent or related operations or tasks. All operations within a single transaction must succeed for the overall success of the transaction. If any operations within a transaction fail, the transaction as a whole fails and rolls back any changes. Transactions are particularly useful when dealing with a series of changes as part of a larger operation. In Red Hat JBoss Data Grid, transactions are only available in Library mode. 25.1.1. About the Transaction Manager In Red Hat JBoss Data Grid, the Transaction Manager coordinates transactions across a single or multiple resources. The responsibilities of a Transaction Manager include: initiating and concluding transactions managing information about each transaction coordinating transactions as they operate over multiple resources recovering from a failed transaction by rolling back changes 25.1.2. XA Resources and Synchronizations XA Resources are fully fledged transaction participants. In the prepare phase (see Section F.7, "About Two Phase Commit (2PC)" for details), the XA Resource returns a vote with either the value OK or ABORT . If the Transaction Manager receives OK votes from all XA Resources, the transaction is committed; otherwise, it is rolled back. Synchronizations are a type of listener that receives notifications about events leading to the transaction life cycle. Synchronizations receive an event before and after the operation completes. Unless recovery is required, it is not necessary to register as a full XA resource. An advantage of synchronizations is that they allow the Transaction Manager to optimize 2PC (Two Phase Commit) with a 1PC (One Phase Commit) where only one other resource is enlisted with that transaction (last resource commit optimization). This makes registering a synchronization more efficient. However, if the operation fails in the prepare phase within Red Hat JBoss Data Grid, the transaction is not rolled back, and if there are more participants in the transaction, they can ignore this failure and commit. Additionally, errors encountered in the commit phase are not propagated to the application code that commits the transaction. By default, JBoss Data Grid registers with the transaction as a synchronization. 25.1.3. Optimistic and Pessimistic Transactions Pessimistic transactions acquire the locks when the first write operation on the key executes. After the key is locked, no other transaction can modify the key until this transaction is committed or rolled back. It is up to the application code to acquire the locks in the correct order to prevent deadlocks. With optimistic transactions, locks are acquired at transaction prepare time and are held until the transaction commits (or rolls back). Also, Red Hat JBoss Data Grid sorts keys for all entries modified within a transaction automatically, preventing any deadlocks occurring due to the incorrect order of keys being locked. This results in: fewer messages being sent during the transaction execution locks held for shorter periods improved throughput Note Read operations never acquire any locks. Acquiring the lock for a read operation on demand is possible only with pessimistic transactions, using the FORCE_WRITE_LOCK flag with the operation. 25.1.4. Write Skew Checks A common use case for entries is that they are read and subsequently written in a transaction.
However, a third transaction can modify the entry between these two operations. To detect such a situation and roll back the transaction, Red Hat JBoss Data Grid offers entry versioning and write skew checks. If the modified version is not the same as when it was last read during the transaction, the write skew check throws an exception and the transaction is rolled back. Enabling write skew checks requires the REPEATABLE_READ isolation level. Also, in clustered mode (distributed or replicated modes), set up entry versioning. For local mode, entry versioning is not required. Important With optimistic transactions, write skew checks are required for (atomic) conditional operations. 25.1.5. Transactions Spanning Multiple Cache Instances Each cache operates as a separate, standalone Java Transaction API ( JTA ) resource. However, components can be internally shared by Red Hat JBoss Data Grid for optimization, but this sharing does not affect how caches interact with a Java Transaction API ( JTA ) Manager. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-set_up_transactions
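The chapter above is conceptual, so the following Java sketch is only an illustration of the ideas it describes (pessimistic locking, registering as a synchronization, and the FORCE_WRITE_LOCK flag). It uses the embedded Infinispan API that JBoss Data Grid Library mode is built on; the cache key, the values, and the assumption that a default transaction manager lookup is available should all be verified against the JBoss Data Grid 6.6 API documentation.

```java
import javax.transaction.TransactionManager;

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.context.Flag;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

public class PessimisticTransactionSketch {
    public static void main(String[] args) throws Exception {
        // Declare a transactional cache that acquires locks eagerly (pessimistic locking)
        // and registers with the transaction as a Synchronization (the default behavior).
        Configuration cfg = new ConfigurationBuilder()
                .transaction()
                    .transactionMode(TransactionMode.TRANSACTIONAL)
                    .lockingMode(LockingMode.PESSIMISTIC)
                    .useSynchronization(true)
                .build();

        DefaultCacheManager manager = new DefaultCacheManager(cfg);
        try {
            Cache<String, String> cache = manager.getCache();
            AdvancedCache<String, String> advanced = cache.getAdvancedCache();
            TransactionManager tm = advanced.getTransactionManager();

            tm.begin();
            try {
                // FORCE_WRITE_LOCK takes the write lock on a read, as described in the note above.
                String balance = advanced.withFlags(Flag.FORCE_WRITE_LOCK).get("accountBalance");
                cache.put("accountBalance", balance == null ? "100" : balance);
                tm.commit();
            } catch (Exception e) {
                tm.rollback();
                throw e;
            }
        } finally {
            manager.stop();
        }
    }
}
```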
Chapter 13. Key Manager (barbican) Parameters | Chapter 13. Key Manager (barbican) Parameters Parameter Description ATOSVars Hash of atos-hsm role variables used to install ATOS client software. BarbicanDogtagStoreGlobalDefault Whether this plugin is the global default plugin. The default value is False . BarbicanDogtagStoreHost Hostname of the Dogtag server. BarbicanDogtagStoreNSSPassword Password for the NSS DB. BarbicanDogtagStorePEMPath Path for the PEM file used to authenticate requests. The default value is /etc/barbican/kra_admin_cert.pem . BarbicanDogtagStorePort Port for the Dogtag server. The default value is 8443 . BarbicanKmipStoreGlobalDefault Whether this plugin is the global default plugin. The default value is False . BarbicanKmipStoreHost Host for KMIP device. BarbicanKmipStorePassword Password to connect to KMIP device. BarbicanKmipStorePort Port for KMIP device. BarbicanKmipStoreUsername Username to connect to KMIP device. BarbicanPassword The password for the OpenStack Key Manager (barbican) service account. BarbicanPkcs11AlwaysSetCkaSensitive Always set CKA_SENSITIVE=CK_TRUE. The default value is True . BarbicanPkcs11CryptoAESGCMGenerateIV Generate IVs for CKM_AES_GCM encryption mechanism. The default value is True . BarbicanPkcs11CryptoATOSEnabled Enable ATOS for PKCS11. The default value is False . BarbicanPkcs11CryptoEnabled Enable PKCS11. The default value is False . BarbicanPkcs11CryptoEncryptionMechanism Cryptoki Mechanism used for encryption. The default value is CKM_AES_CBC . BarbicanPkcs11CryptoGlobalDefault Whether this plugin is the global default plugin. The default value is False . BarbicanPkcs11CryptoHMACKeyType Cryptoki Key Type for Master HMAC key. The default value is CKK_AES . BarbicanPkcs11CryptoHMACKeygenMechanism Cryptoki Mechanism used to generate Master HMAC Key. The default value is CKM_AES_KEY_GEN . BarbicanPkcs11CryptoHMACLabel Label for the HMAC key. BarbicanPkcs11CryptoLibraryPath Path to vendor PKCS11 library. BarbicanPkcs11CryptoLogin Password to login to PKCS11 session. BarbicanPkcs11CryptoMKEKLabel Label for Master KEK. BarbicanPkcs11CryptoMKEKLength Length of Master KEK in bytes. The default value is 256 . BarbicanPkcs11CryptoRewrapKeys Cryptoki Mechanism used to generate Master HMAC Key. The default value is False . BarbicanPkcs11CryptoSlotId Slot Id for the HSM. The default value is 0 . BarbicanPkcs11CryptoThalesEnabled Enable Thales for PKCS11. The default value is False . BarbicanSimpleCryptoGlobalDefault Whether this plugin is the global default plugin. The default value is False . BarbicanSimpleCryptoKek KEK used to encrypt secrets. BarbicanWorkers Set the number of workers for barbican::wsgi::apache. The default value is %{::processorcount} . NotificationDriver Driver or drivers to handle sending notifications. The default value is messagingv2 . ThalesHSMNetworkName The network that the HSM is listening on. The default value is internal_api . ThalesVars Hash of thales-hsm role variables used to install Thales client software. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/key-manager-barbican-parameters |
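These options are normally passed to the overcloud as Heat parameter_defaults. The sketch below enables the simple-crypto plugin and sets a KEK; the environment file name and KEK value are placeholders, and the path of the barbican enabling environment should be checked against your openstack-tripleo-heat-templates version.

```
~]$ cat > barbican-simple-crypto.yaml <<'EOF'
parameter_defaults:
  BarbicanSimpleCryptoGlobalDefault: true
  BarbicanSimpleCryptoKek: '<base64-encoded-32-byte-key>'   # placeholder
EOF
~]$ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
      -e barbican-simple-crypto.yaml
```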
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] | Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. status object HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. 4.1.1. .spec Description HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. Type object Required scaleTargetRef maxReplicas Property Type Description behavior object HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). maxReplicas integer maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less that minReplicas. metrics array metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. metrics[] object MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). minReplicas integer minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available. scaleTargetRef object CrossVersionObjectReference contains enough information to let you identify the referred resource. 4.1.2. .spec.behavior Description HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). Type object Property Type Description scaleDown object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. 
They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. scaleUp object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. 4.1.3. .spec.behavior.scaleDown Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.4. .spec.behavior.scaleDown.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.5. .spec.behavior.scaleDown.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.6. .spec.behavior.scaleUp Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. 
At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.7. .spec.behavior.scaleUp.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.8. .spec.behavior.scaleUp.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.9. .spec.metrics Description metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. Type array 4.1.10. .spec.metrics[] Description MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). Type object Required type Property Type Description containerResource object ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. external object ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). object object ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
resource object ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. type string type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.11. .spec.metrics[].containerResource Description ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target container Property Type Description container string container is the name of the container in the pods of the scaling target name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.12. .spec.metrics[].containerResource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.13. .spec.metrics[].external Description ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.14. .spec.metrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.15. 
.spec.metrics[].external.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.16. .spec.metrics[].object Description ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required describedObject target metric Property Type Description describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.17. .spec.metrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.18. .spec.metrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.19. .spec.metrics[].object.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.20. .spec.metrics[].pods Description PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.21. .spec.metrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.22. .spec.metrics[].pods.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.23. .spec.metrics[].resource Description ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target Property Type Description name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.24. .spec.metrics[].resource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.25. .spec.scaleTargetRef Description CrossVersionObjectReference contains enough information to let you identify the referred resource. 
Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.26. .status Description HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. Type object Required desiredReplicas Property Type Description conditions array conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. conditions[] object HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. currentMetrics array currentMetrics is the last read state of the metrics used by this autoscaler. currentMetrics[] object MetricStatus describes the last-read state of a single metric. currentReplicas integer currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler. desiredReplicas integer desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler. lastScaleTime Time lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed. observedGeneration integer observedGeneration is the most recent generation observed by this autoscaler. 4.1.27. .status.conditions Description conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. Type array 4.1.28. .status.conditions[] Description HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another message string message is a human-readable explanation containing details about the transition reason string reason is the reason for the condition's last transition. status string status is the status of the condition (True, False, Unknown) type string type describes the current condition 4.1.29. .status.currentMetrics Description currentMetrics is the last read state of the metrics used by this autoscaler. Type array 4.1.30. .status.currentMetrics[] Description MetricStatus describes the last-read state of a single metric. Type object Required type Property Type Description containerResource object ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. external object ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. object object ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). 
pods object PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). resource object ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. type string type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponds to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.31. .status.currentMetrics[].containerResource Description ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current container Property Type Description container string container is the name of the container in the pods of the scaling target current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.32. .status.currentMetrics[].containerResource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.33. .status.currentMetrics[].external Description ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.34. .status.currentMetrics[].external.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.35. .status.currentMetrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. 
When unset, just the metricName will be used to gather metrics. 4.1.36. .status.currentMetrics[].object Description ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required metric current describedObject Property Type Description current object MetricValueStatus holds the current value for a metric describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.37. .status.currentMetrics[].object.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.38. .status.currentMetrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.39. .status.currentMetrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.40. .status.currentMetrics[].pods Description PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.41. .status.currentMetrics[].pods.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.42. 
.status.currentMetrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.43. .status.currentMetrics[].resource Description ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current Property Type Description current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.44. .status.currentMetrics[].resource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.2. API endpoints The following API endpoints are available: /apis/autoscaling/v2/horizontalpodautoscalers GET : list or watch objects of kind HorizontalPodAutoscaler /apis/autoscaling/v2/watch/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers DELETE : delete collection of HorizontalPodAutoscaler GET : list or watch objects of kind HorizontalPodAutoscaler POST : create a HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} DELETE : delete a HorizontalPodAutoscaler GET : read the specified HorizontalPodAutoscaler PATCH : partially update the specified HorizontalPodAutoscaler PUT : replace the specified HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} GET : watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status GET : read status of the specified HorizontalPodAutoscaler PATCH : partially update status of the specified HorizontalPodAutoscaler PUT : replace status of the specified HorizontalPodAutoscaler 4.2.1. /apis/autoscaling/v2/horizontalpodautoscalers HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.1. 
HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty 4.2.2. /apis/autoscaling/v2/watch/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers HTTP method DELETE Description delete collection of HorizontalPodAutoscaler Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.5. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a HorizontalPodAutoscaler Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 202 - Accepted HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.4. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method DELETE Description delete a HorizontalPodAutoscaler Table 4.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HorizontalPodAutoscaler Table 4.13. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HorizontalPodAutoscaler Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HorizontalPodAutoscaler Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. 
Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.6. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.7. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status Table 4.21. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description read status of the specified HorizontalPodAutoscaler Table 4.22. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HorizontalPodAutoscaler Table 4.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.24. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HorizontalPodAutoscaler Table 4.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.26. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.27. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/autoscale_apis/horizontalpodautoscaler-autoscaling-v2 |
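The endpoint paths listed above can be exercised with plain REST calls. The following is a minimal sketch, not part of the original reference, that reads a HorizontalPodAutoscaler and patches .spec.metrics with a Resource target of type Utilization; the API server URL, bearer token, CA bundle path, namespace, and autoscaler name are placeholder assumptions, and a kubeconfig-based client library could be used instead.

```python
import json

import requests

API = "https://api.example.com:6443"              # placeholder API server URL
TOKEN = "sha256~example-token"                    # placeholder bearer token
CA = "/path/to/ca.crt"                            # placeholder cluster CA bundle
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
NS, NAME = "demo", "demo-hpa"                     # placeholder namespace and name

# GET /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}
url = f"{API}/apis/autoscaling/v2/namespaces/{NS}/horizontalpodautoscalers/{NAME}"
hpa = requests.get(url, headers=HEADERS, verify=CA).json()
status = hpa.get("status", {})
print(status.get("currentReplicas"), status.get("desiredReplicas"))

# PATCH the spec so the autoscaler targets 75% average CPU utilization,
# following the .spec.metrics[].resource.target schema described above.
# A JSON merge patch replaces the whole metrics list with the one given here.
patch = {"spec": {"metrics": [{"type": "Resource",
                               "resource": {"name": "cpu",
                                            "target": {"type": "Utilization",
                                                       "averageUtilization": 75}}}]}}
resp = requests.patch(url,
                      headers={**HEADERS,
                               "Content-Type": "application/merge-patch+json"},
                      data=json.dumps(patch),
                      verify=CA)
resp.raise_for_status()
```

The dryRun and fieldValidation query parameters described above apply to the same PATCH and PUT requests.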
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console_with_fedramp/making-open-source-more-inclusive |
Chapter 2. Ceph Object Gateway administrative API | Chapter 2. Ceph Object Gateway administrative API As a developer, you can administer the Ceph Object Gateway by interacting with the RESTful application programming interface (API). The Ceph Object Gateway makes available the features of the radosgw-admin command in a RESTful API. You can manage users, data, quotas, and usage which you can integrate with other management platforms. Note Red Hat recommends using the command-line interface when configuring the Ceph Object Gateway. The administrative API provides the following functionality: Authentication Requests User Account Management Administrative User Getting User Information Creating Modifying Removing Creating Subuser Modifying Subuser Removing Subuser User Capabilities Management Adding Removing Key Management Creating Removing Bucket Management Getting Bucket Information Checking Index Removing Linking Unlinking Policy Object Management Removing Policy Quota Management Getting User Setting User Getting Bucket Setting Bucket Getting Usage Information Removing Usage Information Standard Error Responses 2.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 2.2. Administration operations An administrative Application Programming Interface (API) request will be done on a URI that starts with the configurable 'admin' resource entry point. Authorization for the administrative API duplicates the S3 authorization mechanism. Some operations require that the user holds special administrative capabilities. The response entity type, either XML or JSON, might be specified as the 'format' option in the request and defaults to JSON if not specified. Example 2.3. Administration authentication requests Amazon's S3 service uses the access key and a hash of the request header and the secret key to authenticate the request. It has the benefit of providing an authenticated request, especially large uploads, without SSL overhead. Most use cases for the S3 API involve using open-source S3 clients such as the AmazonS3Client in the Amazon SDK for Java or Python Boto. These libraries do not support the Ceph Object Gateway Admin API. You can subclass and extend these libraries to support the Ceph Admin API. Alternatively, you can create a unique Gateway client. Creating an execute() method The CephAdminAPI example class in this section illustrates how to create an execute() method that can take request parameters, authenticate the request, call the Ceph Admin API and receive a response. The CephAdminAPI class example is not supported or intended for commercial use. It is for illustrative purposes only. Calling the Ceph Object Gateway The client code contains five calls to the Ceph Object Gateway to demonstrate CRUD operations: Create a User Get a User Modify a User Create a Subuser Delete a User To use this example, get the httpcomponents-client-4.5.3 Apache HTTP components. You can download it for example here: http://hc.apache.org/downloads.cgi . Then unzip the tar file, navigate to its lib directory and copy the contents to the /jre/lib/ext directory of the JAVA_HOME directory, or a custom classpath. As you examine the CephAdminAPI class example, notice that the execute() method takes an HTTP method, a request path, an optional subresource, null if not specified, and a map of parameters. To execute with subresources, for example, subuser , and key , you will need to specify the subresource as an argument in the execute() method. The example method: Builds a URI. 
Builds an HTTP header string. Instantiates an HTTP request, for example, PUT , POST , GET , DELETE . Adds the Date header to the HTTP header string and the request header. Adds the Authorization header to the HTTP request header. Instantiates an HTTP client and passes it the instantiated HTTP request. Makes a request. Returns a response. Building the header string Building the header string is the portion of the process that involves Amazon's S3 authentication procedure. Specifically, the example method does the following: Adds a request type, for example, PUT , POST , GET , DELETE . Adds the date. Adds the requestPath. The request type should be uppercase with no leading or trailing white space. If you do not trim white space, authentication will fail. The date MUST be expressed in GMT, or authentication will fail. The exemplary method does not have any other headers. The Amazon S3 authentication procedure sorts x-amz headers lexicographically. So if you are adding x-amz headers, be sure to add them lexicographically. Once you have built the header string, the step is to instantiate an HTTP request and pass it the URI. The exemplary method uses PUT for creating a user and subuser, GET for getting a user, POST for modifying a user and DELETE for deleting a user. Once you instantiate a request, add the Date header followed by the Authorization header. Amazon's S3 authentication uses the standard Authorization header, and has the following structure: The CephAdminAPI example class has a base64Sha1Hmac() method, which takes the header string and the secret key for the admin user, and returns a SHA1 HMAC as a base-64 encoded string. Each execute() call will invoke the same line of code to build the Authorization header: The following CephAdminAPI example class requires you to pass the access key, secret key, and an endpoint to the constructor. The class provides accessor methods to change them at runtime. Example The subsequent CephAdminAPIClient example illustrates how to instantiate the CephAdminAPI class, build a map of request parameters, and use the execute() method to create, get, update and delete a user. Example Additional Resources See the S3 Authentication section in the Red Hat Ceph Storage Developer Guide for additional details. For a more extensive explanation of the Amazon S3 authentication procedure, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation. 2.4. Creating an administrative user Important To run the radosgw-admin command from the Ceph Object Gateway node, ensure the node has the admin key. The admin key can be copied from any Ceph Monitor node. Prerequisites Root-level access to the Ceph Object Gateway node. Procedure Create an object gateway user: Syntax Example The radosgw-admin command-line interface will return the user. Example output Assign administrative capabilities to the user you create: Syntax Example The radosgw-admin command-line interface will return the user. The "caps": will have the capabilities you assigned to the user: Example output Now you have a user with administrative privileges. 2.5. Get user information Get the user's information. Capabilities Syntax Request Parameters uid Description The user for which the information is requested. Type String Example foo_user Required Yes Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user display_name Description Display name for the user. 
Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user information. Special Error Responses None. 2.6. Create a user Create a new user. By default, an S3 key pair will be created automatically and returned in the response. If only a access-key or secret-key is provided, the omitted key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user then it will be modified. Capabilities Syntax Request Parameters uid Description The user ID to be created. Type String Example foo_user Required Yes display-name Description The display name of the user to be created. Type String Example foo_user Required Yes email Description The email address associated with the user. Type String Example [email protected] Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 [ s3 ] Required No access-key Description Specify access key. Type String Example ABCD0EF12GHIJ2K34LMN Required No secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No user-caps Description User capabilities. Type String Example usage=read, write; users=read Required No generate-key Description Generate a new key pair and add to the existing keyring. Type Boolean Example True [True] Required No max-buckets Description Specify the maximum number of buckets the user can own. Type Integer Example 500 [1000] Required No suspended Description Specify whether the user should be suspended Type Boolean Example False [False] Required No Response Entities user Description Specify whether the user should be suspended Type Boolean Parent No user_id Description The user ID. Type String Parent user display_name Description Display name for the user. Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent If successful, the response contains the user information. Special Error Responses UserExists Description Attempt to create existing user. Code 409 Conflict InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict EmailExists Description Provided email address exists. 
Code 409 Conflict InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request Additional Resources See the Red Hat Ceph Storage Developer Guide for creating subusers. 2.7. Modify a user Modify an existing user. Capabilities Syntax Request Parameters uid Description The user ID to be created. Type String Example foo_user Required Yes display-name Description The display name of the user to be created. Type String Example foo_user Required Yes email Description The email address associated with the user. Type String Example [email protected] Required No generate-key Description Generate a new key pair and add to the existing keyring. Type Boolean Example True [False] Required No access-key Description Specify access key. Type String Example ABCD0EF12GHIJ2K34LMN Required No secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 Required No user-caps Description User capabilities. Type String Example usage=read, write; users=read Required No max-buckets Description Specify the maximum number of buckets the user can own. Type Integer Example 500 [1000] Required No suspended Description Specify whether the user should be suspended Type Boolean Example False [False] Required No Response Entities user Description Specify whether the user should be suspended Type Boolean Parent No user_id Description The user ID. Type String Parent user display_name Description Display name for the user. Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent If successful, the response contains the user information. Special Error Responses InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict EmailExists Description Provided email address exists. Code 409 Conflict InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request Additional Resources See the Red Hat Ceph Storage Developer Guide for modifying subusers. 2.8. Remove a user Remove an existing user. Capabilities Syntax Request Parameters uid Description The user ID to be removed. Type String Example foo_user Required Yes purge-data Description When specified the buckets and objects belonging to the user will also be removed. Type Boolean Example True Required No Response Entities None. Special Error Responses None. Additional Resources See Red Hat Ceph Storage Developer Guide for removing subusers. 2.9. Create a subuser Create a new subuser, primarily useful for clients using the Swift API. Note Either gen-subuser or subuser is required for a valid request. In general, for a subuser to be useful, it must be granted permissions by specifying access . 
As with user creation if subuser is specified without secret , then a secret key is automatically generated. Capabilities Syntax Request Parameters uid Description The user ID under which a subuser is to be created. Type String Example foo_user Required Yes subuser Description Specify the subuser ID to be created. Type String Example sub_foo Required Yes (or gen-subuser ) gen-subuser Description Specify the subuser ID to be created. Type String Example sub_foo Required Yes (or gen-subuser ) secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift (default), s3. Type String Example swift [ swift ] Required No access Description Set access permissions for sub-user, should be one of read, write, readwrite, full . Type String Example read Required No generate-secret Description Generate the secret key. Type Boolean Example True [False] Required No Response Entities subusers Description Subusers associated with the user account. Type Container Parent N/A permissions Description Subuser access to user account. Type String Parent subusers If successful, the response contains the subuser information. Special Error Responses SubuserExists Description Specified subuser exists. Code 409 Conflict InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request InvalidAccess Description Invalid subuser access specified Code 400 Bad Request 2.10. Modify a subuser Modify an existing subuser. Capabilities Syntax Request Parameters uid Description The user ID under which a subuser is to be created. Type String Example foo_user Required Yes subuser Description The subuser ID to be modified. Type String Example sub_foo Required generate-secret Description Generate a new secret key for the subuser, replacing the existing key. Type Boolean Example True [False] Required No secret Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift (default), s3. Type String Example swift [ swift ] Required No access Description Set access permissions for sub-user, should be one of read, write, readwrite, full . Type String Example read Required No Response Entities subusers Description Subusers associated with the user account. Type Container Parent N/A id Description Subuser ID Type String Parent subusers permissions Description Subuser access to user account. Type String Parent subusers If successful, the response contains the subuser information. Special Error Responses InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request InvalidAccess Description Invalid subuser access specified Code 400 Bad Request 2.11. Remove a subuser Remove an existing subuser. Capabilities Syntax Request Parameters uid Description The user ID to be removed. Type String Example foo_user Required Yes subuser Description The subuser ID to be removed. Type String Example sub_foo Required Yes purge-keys Description Remove keys belonging to the subuser. Type Boolean Example True [True] Required No Response Entities None. Special Error Responses None. 2.12. Add capabilities to a user Add an administrative capability to a specified user. 
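As an illustration of how an administrative request like this is authenticated, the following is a minimal Python sketch, not taken from this guide, that grants a capability using the header-string and Authorization procedure described earlier in this chapter. The gateway address and key pair are placeholders, the default admin entry point is assumed, the canonical string includes the blank Content-MD5 and Content-Type fields of standard S3 authentication (which the prose above omits), and only the base /admin/user path is signed (whether a subresource must be included in the signed resource is treated here as an assumption); the query parameters mirror the Request Parameters table that follows.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

import requests

HOST = "http://rgw.example.com:8080"                      # placeholder gateway endpoint
ACCESS_KEY = "ABCD0EF12GHIJ2K34LMN"                       # placeholder admin access key
SECRET_KEY = "0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8"   # placeholder admin secret key

def signed_headers(method, path):
    """Build the Date and Authorization headers: the request type, blank
    Content-MD5 and Content-Type fields, a GMT date, and the request path,
    signed with an SHA1 HMAC of the secret key and base64 encoded."""
    date = formatdate(usegmt=True)                        # the date must be GMT
    string_to_sign = f"{method}\n\n\n{date}\n{path}"
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return {"Date": date, "Authorization": f"AWS {ACCESS_KEY}:{signature}"}

# Grant the usage read/write capability to a user; uid and user-caps follow the
# Request Parameters table below.
params = {"uid": "foo_user", "user-caps": "usage=read, write", "format": "json"}
resp = requests.put(f"{HOST}/admin/user?caps",
                    headers=signed_headers("PUT", "/admin/user"),
                    params=params)
print(resp.status_code, resp.json())
```

Changing only the HTTP method and query parameters, the same helper can drive the user, subuser, and key operations described above.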
Capabilities Syntax Request Parameters uid Description The user ID to add an administrative capability to. Type String Example foo_user Required Yes user-caps Description The administrative capability to add to the user. Type String Example usage=read, write Required Yes Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user's capabilities. Special Error Responses InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request 2.13. Remove capabilities from a user Remove an administrative capability from a specified user. Capabilities Syntax Request Parameters uid Description The user ID to remove an administrative capability from. Type String Example foo_user Required Yes user-caps Description The administrative capabilities to remove from the user. Type String Example usage=read, write Required Yes Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user's capabilities. Special Error Responses InvalidCap Description Attempt to remove an invalid admin capability. Code 400 Bad Request NoSuchCap Description User does not possess specified capability. Code 404 Not Found 2.14. Create a key Create a new key. If a subuser is specified, then by default the created keys will be of swift type. If only one of access-key or secret-key is provided, the omitted key will be automatically generated; that is, if only secret-key is specified, then access-key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user, then it will be modified. The response is a container listing all keys of the same type as the key created. Note When creating a swift key, specifying the option access-key will have no effect. Additionally, only one swift key might be held by each user or subuser. Capabilities Syntax Request Parameters uid Description The user ID to receive the new key. Type String Example foo_user Required Yes subuser Description The subuser ID to receive the new key. Type String Example sub_foo Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 [ s3 ] Required No access-key Description Specify access key. Type String Example AB01C2D3EF45G6H7IJ8K Required No secret-key Description Specify secret key. Type String Example 0ab/CdeFGhij1klmnopqRSTUv1WxyZabcDEFgHij Required No generate-key Description Generate a new key pair and add to the existing keyring. Type Boolean Example True [ True ] Required No Response Entities keys Description Keys of the type created, associated with this user account. Type Container Parent N/A user Description The user account associated with the key. Type String Parent keys access-key Description The access key. Type String Parent keys secret-key Description The secret key. Type String Parent keys Special Error Responses InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified.
Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict 2.15. Remove a key Remove an existing key. Capabilities Syntax Request Parameters access-key Description The S3 access key belonging to the S3 key pair to remove. Type String Example AB01C2D3EF45G6H7IJ8K Required Yes uid Description The user to remove the key from. Type String Example foo_user Required No subuser Description The subuser to remove the key from. Type String Example sub_foo Required No key-type Description Key type to be removed, options are: swift, s3. Note Required to remove swift key. Type String Example swift Required No Special Error Responses None. Response Entities None. 2.16. Bucket notifications As a storage administrator, you can use these APIs to provide configuration and control interfaces for the bucket notification mechanism. The API topics are named objects that contain the definition of a specific endpoint. Bucket notifications associate topics with a specific bucket. The S3 bucket operations section gives more details on bucket notifications. Note In all topic actions, the parameters are URL encoded, and sent in the message body using application/x-www-form-urlencoded content type. Note Any bucket notification already associated with the topic needs to be re-created for the topic update to take effect. 2.16.1. Prerequisites Create bucket notifications on the Ceph Object Gateway. 2.16.2. Overview of bucket notifications Bucket notifications provide a way to send information out of the Ceph Object Gateway when certain events happen in the bucket. Bucket notifications can be sent to HTTP, AMQP0.9.1, and Kafka endpoints. A notification entry must be created to send bucket notifications for events on a specific bucket and to a specific topic. A bucket notification can be created on a subset of event types or by default for all event types. The bucket notification can filter out events based on key prefix or suffix, regular expression matching the keys, and the metadata attributes attached to the object, or the object tags. Bucket notifications have a REST API to provide configuration and control interfaces for the bucket notification mechanism. 2.16.3. Persistent notifications Persistent notifications enable reliable and asynchronous delivery of notifications from the Ceph Object Gateway to the endpoint configured at the topic. Regular notifications are also reliable because the delivery to the endpoint is performed synchronously during the request. With persistent notifications, the Ceph Object Gateway retries sending notifications even when the endpoint is down or there are network issues during the operations, that is notifications are retried if not successfully delivered to the endpoint. Notifications are sent only after all other actions related to the notified operation are successful. If an endpoint goes down for a longer duration, the notification queue fills up and the S3 operations that have configured notifications for these endpoints will fail. Note With kafka-ack-level=none , there is no indication for message failures, and therefore messages sent while broker is down are not retried, when the broker is up again. After the broker is up again, only new notifications are seen. 2.16.4. Creating a topic You can create topics before creating bucket notifications. 
A topic is a Simple Notification Service (SNS) entity and all the topic operations, that is, create , delete , list , and get , are SNS operations. The topic needs to have endpoint parameters that are used when a bucket notification is created. Once the request is successful, the response includes the topic Amazon Resource Name (ARN) that can be used later to reference this topic in the bucket notification request. Note A topic_arn provides the bucket notification configuration and is generated after a topic is created. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Create a topic with the following request format: Syntax Here are the request parameters: Endpoint : URL of an endpoint to send notifications to. OpaqueData : opaque data is set in the topic configuration and added to all notifications triggered by the topic. persistent : indication of whether notifications to this endpoint are persistent that is asynchronous or not. By default the value is false . HTTP endpoint: URL : https:// FQDN : PORT port defaults to : Use 80/443 for HTTP[S] accordingly. verify-ssl : Indicates whether the server certificate is validated by the client or not. By default , it is true . AMQP0.9.1 endpoint: URL : amqp:// USER : PASSWORD @ FQDN : PORT [/ VHOST ]. User and password defaults to: guest and guest respectively. User and password details should be provided over HTTPS, otherwise the topic creation request is rejected. port defaults to : 5672. vhost defaults to: "/" amqp-exchange : The exchanges must exist and be able to route messages based on topics. This is a mandatory parameter for AMQP0.9.1. Different topics pointing to the same endpoint must use the same exchange. amqp-ack-level : No end to end acknowledgment is required, as messages may persist in the broker before being delivered into their final destination. Three acknowledgment methods exist: none : Message is considered delivered if sent to the broker. broker : By default, the message is considered delivered if acknowledged by the broker. routable : Message is considered delivered if the broker can route to a consumer. Note The key and value of a specific parameter do not have to reside in the same line, or in any specific order, but must use the same index. Attribute indexing does not need to be sequential or start from any specific value. Note The topic-name is used for the AMQP topic. Kafka endpoint: URL : kafka:// USER : PASSWORD @ FQDN : PORT . use-ssl is set to false by default. If use-ssl is set to true , secure connection is used for connecting with the broker. If ca-location is provided, and secure connection is used, the specified CA will be used, instead of the default one, to authenticate the broker. User and password can only be provided over HTTP[S]. Otherwise, the topic creation request is rejected. User and password may only be provided together with use-ssl , otherwise, the connection to the broker will fail. port defaults to : 9092. kafka-ack-level : no end to end acknowledgment required, as messages may persist in the broker before being delivered into their final destination. Two acknowledgment methods exist: none : message is considered delivered if sent to the broker. broker : By default, the message is considered delivered if acknowledged by the broker. 
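Because topics are Simple Notification Service entities, an SNS-capable client can issue the request once these parameters are chosen. The following is a minimal sketch using boto3, which this guide does not itself show (it only mentions Python Boto), so treat the client choice as an assumption; the gateway endpoint, credentials, topic name, and Kafka broker address are placeholders, and the attribute keys correspond to the endpoint parameters listed above.

```python
import boto3

# Placeholders: Ceph Object Gateway endpoint and the credentials of a user
# that is allowed to create topics.
client = boto3.client(
    "sns",
    endpoint_url="http://rgw.example.com:8080",
    region_name="default",
    aws_access_key_id="ABCD0EF12GHIJ2K34LMN",
    aws_secret_access_key="0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8",
)

# Attribute keys follow the endpoint parameters described above: a persistent
# Kafka endpoint with broker-level acknowledgments and opaque data that is
# added to every notification the topic triggers.
response = client.create_topic(
    Name="storage-events",
    Attributes={
        "push-endpoint": "kafka://kafka.example.com:9092",
        "kafka-ack-level": "broker",
        "persistent": "true",
        "OpaqueData": "example-opaque-data",
    },
)
print(response["TopicArn"])   # arn:aws:sns:ZONE_GROUP:TENANT:TOPIC format
```

The TopicArn in the response is the value that a bucket notification configuration refers to later.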
The following is an example of the response format: Example Note The topic Amazon Resource Name (ARN) in the response will have the following format: arn:aws:sns: ZONE_GROUP : TENANT : TOPIC The following is an example of AMQP0.9.1 endpoint: Example 2.16.5. Getting topic information Returns information about a specific topic. This can include endpoint information if it is provided. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Get topic information with the following request format: Syntax Here is an example of the response format: The following are the tags and definitions: User : Name of the user that created the topic. Name : Name of the topic. JSON formatted endpoints include: EndpointAddress : The endpoint URL. If the endpoint URL contains user and password information, the request must be made over HTTPS. Otherwise, the topic get request is rejected. EndPointArgs : The endpoint arguments. EndpointTopic : The topic name that is sent to the endpoint; it can be different from the topic name above. HasStoredSecret : true when the endpoint URL contains user and password information. Persistent : true when the topic is persistent. TopicArn : Topic ARN. OpaqueData : This is an opaque data set on the topic. 2.16.6. Listing topics List the topics that the user has defined. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure List topic information with the following request format: Syntax Here is an example of the response format: Note If the endpoint URL contains user and password information in any of the topics, the request must be made over HTTPS. Otherwise, the topic list request is rejected. 2.16.7. Deleting topics Removing an already deleted topic results in no operation and is not a failure. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Delete a topic with the following request format: Syntax Here is an example of the response format: 2.16.8. Event record An event holds information about the operation done by the Ceph Object Gateway and is sent as a payload over the chosen endpoint, such as HTTP, HTTPS, Kafka, or AMQP0.9.1. The event record is in JSON format. Example These are the event record keys and their definitions: awsRegion : Zonegroup. eventTime : Timestamp that indicates when the event was triggered. eventName : The type of the event. userIdentity.principalId : The identity of the user that triggered the event. requestParameters.sourceIPAddress : The IP address of the client that triggered the event. This field is not supported. responseElements.x-amz-request-id : The request ID that triggered the event. responseElements.x_amz_id_2 : The identity of the Ceph Object Gateway on which the event was triggered. The identity format is RGWID - ZONE - ZONEGROUP . s3.configurationId : The notification ID that created the event. s3.bucket.name : The name of the bucket. s3.bucket.ownerIdentity.principalId : The owner of the bucket. s3.bucket.arn : Amazon Resource Name (ARN) of the bucket. s3.bucket.id : Identity of the bucket. s3.object.key : The object key. s3.object.size : The size of the object. s3.object.eTag : The object etag. s3.object.version : The object version in a versioned bucket.
s3.object.sequencer : Monotonically increasing identifier of the change per object in the hexadecimal format. s3.object.metadata : Any metadata set on the object sent as x-amz-meta . s3.object.tags : Any tags set on the object. s3.eventId : Unique identity of the event. s3.opaqueData : Opaque data is set in the topic configuration and added to all notifications triggered by the topic. Additional Resources See the Event Message Structure for more information. 2.16.9. Supported event types The following event types are supported: s3:ObjectCreated:* s3:ObjectCreated:Put s3:ObjectCreated:Post s3:ObjectCreated:Copy s3:ObjectCreated:CompleteMultipartUpload s3:ObjectRemoved:* s3:ObjectRemoved:Delete s3:ObjectRemoved:DeleteMarkerCreated 2.17. Get bucket information Get information about a subset of the existing buckets. If uid is specified without bucket then all buckets belonging to the user will be returned. If bucket alone is specified, information for that particular bucket will be retrieved. Capabilities Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required No uid Description The user to retrieve bucket information for. Type String Example foo_user Required No stats Description Return bucket statistics. Type Boolean Example True [False] Required No Response Entities stats Description Per bucket information. Type Container Parent N/A buckets Description Contains a list of one or more bucket containers. Type Container Parent buckets bucket Description Container for single bucket information. Type Container Parent buckets name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. Type String Parent bucket If successful, then the request returns a bucket's container with the bucket information. Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.18. Check a bucket index Check the index of an existing bucket. Note To check multipart object accounting with check-objects , fix must be set to True. Capabilities buckets=write Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required Yes check-objects Description Check multipart object accounting. Type Boolean Example True [False] Required No fix Description Also fix the bucket index when checking. Type Boolean Example False [False] Required No Response Entities index Description Status of bucket index. Type String Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.19. Remove a bucket Removes an existing bucket. Capabilities Syntax Request Parameters bucket Description The bucket to remove. Type String Example foo_bucket Required Yes purge-objects Description Remove a bucket's objects before deletion. Type Boolean Example True [False] Required No Response Entities None. Special Error Responses BucketNotEmpty Description Attempted to delete non-empty bucket. Code 409 Conflict ObjectRemovalFailed Description Unable to remove objects. Code 409 Conflict 2.20. Link a bucket Link a bucket to a specified user, unlinking the bucket from any user. 
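As a rough illustration of this request, and of the other administrative bucket operations in the sections that follow, the Python sketch below mirrors the header-signing scheme used by the Java example earlier in this chapter. The host, credentials, bucket name, and user ID are placeholders; treat this as a sketch rather than a reference implementation. The capability requirements and request parameters for the bucket link operation are listed after the sketch.

```python
# Sketch: link a bucket to a user via PUT /admin/bucket, signing the request
# the same way as the Java CephAdminAPI example (METHOD\n\n\nDATE\nPATH,
# HMAC-SHA1, base64). Host, keys, bucket, and uid are placeholders.
import base64
import hashlib
import hmac
from email.utils import formatdate

import requests

ACCESS_KEY = "ACCESS_KEY"
SECRET_KEY = "SECRET_KEY"
HOST = "radosgw.example.com"

def signed_headers(method, path):
    date = formatdate(usegmt=True)  # RFC 1123 date in GMT, as in the Java example
    string_to_sign = f"{method}\n\n\n{date}\n{path}"
    digest = hmac.new(SECRET_KEY.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return {"Date": date, "Authorization": f"AWS {ACCESS_KEY}:{signature}"}

response = requests.put(
    f"http://{HOST}/admin/bucket",
    params={"format": "json", "bucket": "foo_bucket", "uid": "foo_user"},
    headers=signed_headers("PUT", "/admin/bucket"),
)
print(response.status_code, response.text)
```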
Capabilities Syntax Request Parameters bucket Description The bucket to unlink. Type String Example foo_bucket Required Yes uid Description The user ID to link the bucket to. Type String Example foo_user Required Yes Response Entities bucket Description Container for single bucket information. Type Container Parent N/A name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. Type String Parent bucket Special Error Responses BucketUnlinkFailed Description Unable to unlink bucket from specified user. Code 409 Conflict BucketLinkFailed Description Unable to link bucket to specified user. Code 409 Conflict 2.21. Unlink a bucket Unlink a bucket from a specified user. Primarily useful for changing bucket ownership. Capabilities Syntax Request Parameters bucket Description The bucket to unlink. Type String Example foo_bucket Required Yes uid Description The user ID to unlink the bucket from. Type String Example foo_user Required Yes Response Entities None. Special Error Responses BucketUnlinkFailed Description Unable to unlink bucket from specified user. Code 409 Conflict 2.22. Get a bucket or object policy Read the policy of an object or bucket. Capabilities Syntax Request Parameters bucket Description The bucket to read the policy from. Type String Example foo_bucket Required Yes object Description The object to read the policy from. Type String Example foo.txt Required No Response Entities policy Description Access control policy. Type Container Parent N/A If successful, returns the object or bucket policy. Special Error Responses IncompleteBody Description Either bucket was not specified for a bucket policy request or bucket and object were not specified for an object policy request. Code 400 Bad Request 2.23. Remove an object Remove an existing object. Note Does not require owner to be non-suspended. Capabilities Syntax Request Parameters bucket Description The bucket containing the object to be removed. Type String Example foo_bucket Required Yes object Description The object to remove. Type String Example foo.txt Required Yes Response Entities None. Special Error Responses NoSuchObject Description Specified object does not exist. Code 404 Not Found ObjectRemovalFailed Description Unable to remove objects. Code 409 Conflict 2.24. Quotas The administrative Operations API enables you to set quotas on users and on buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes. To view quotas, the user must have a users=read capability. To set, modify, or disable a quota, the user must have users=write capability. Valid parameters for quotas include: Bucket: The bucket option allows you to specify a quota for buckets owned by a user. Maximum Objects: The max-objects setting allows you to specify the maximum number of objects. A negative value disables this setting. Maximum Size: The max-size option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting. Quota Scope: The quota-scope option sets the scope for the quota. The options are bucket and user .
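Before the individual quota operations are described, the following hypothetical Python sketch shows one way to read and then update a user quota with the endpoints covered in the next sections. The host, user ID, and quota values are placeholders, the signing helper repeats the scheme from the earlier sketch, and the JSON body mirrors the user_quota fields shown in the user-creation output earlier in this chapter.

```python
# Hypothetical sketch: read a user quota, then enable it with new limits.
# Host, credentials, uid, and quota values are placeholders.
import base64
import hashlib
import hmac
import json
from email.utils import formatdate

import requests

ACCESS_KEY = "ACCESS_KEY"
SECRET_KEY = "SECRET_KEY"
BASE_URL = "http://radosgw.example.com/admin/user"

def signed_headers(method, path="/admin/user"):
    # Same METHOD\n\n\nDATE\nPATH string as the Java signing example.
    date = formatdate(usegmt=True)
    mac = hmac.new(SECRET_KEY.encode(), f"{method}\n\n\n{date}\n{path}".encode(), hashlib.sha1)
    signature = base64.b64encode(mac.digest()).decode()
    return {"Date": date, "Authorization": f"AWS {ACCESS_KEY}:{signature}"}

params = {"quota": "", "uid": "foo_user", "quota-type": "user"}

# GET /admin/user?quota&uid=<uid>&quota-type=user
print(requests.get(BASE_URL, params=params, headers=signed_headers("GET")).json())

# PUT the quota back with updated settings; the body is a JSON representation
# of the same fields returned by the read operation (see the user_quota block
# in the user-creation output).
new_quota = {"enabled": True, "max_size_kb": 1048576, "max_objects": -1}
response = requests.put(BASE_URL, params=params,
                        headers=signed_headers("PUT"),
                        data=json.dumps(new_quota))
print(response.status_code)
```

Setting a field to a negative value disables that limit, consistent with the quota parameters described above.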
2.25. Get a user quota To get a quota, the user must have users capability set with read permission. Syntax 2.26. Set a user quota To set a quota, the user must have users capability set with write permission. Syntax The content must include a JSON representation of the quota settings as encoded in the corresponding read operation. 2.27. Get a bucket quota Get information about a subset of the existing buckets. If uid is specified without bucket then all buckets belonging to the user will be returned. If bucket alone is specified, information for that particular bucket will be retrieved. Capabilities Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required No uid Description The user to retrieve bucket information for. Type String Example foo_user Required No stats Description Return bucket statistics. Type Boolean Example True [False] Required No Response Entities stats Description Per bucket information. Type Container Parent N/A buckets Description Contains a list of one or more bucket containers. Type Container Parent N/A bucket Description Container for single bucket information. Type Container Parent buckets name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. Type String Parent bucket If successful, then the request returns a bucket's container with the bucket information. Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.28. Set a bucket quota To set a quota, the user must have users capability set with write permission. Syntax The content must include a JSON representation of the quota settings as encoded in the corresponding read operation. 2.29. Get usage information Requesting bandwidth usage information. Capabilities Syntax Request Parameters uid Description The user for which the information is requested. Type String Required Yes start Description The date, and optionally, the time of when the data request started. For example, 2012-09-25 16:00:00 . Type String Required No end Description The date, and optionally, the time of when the data request ended. For example, 2012-09-25 16:00:00 . Type String Required No show-entries Description Specifies whether data entries should be returned. Type Boolean Required No show-summary Description Specifies whether a data summary should be returned. Type Boolean Required No Response Entities usage Description A container for the usage information. Type Container entries Description A container for the usage entries information. Type Container user Description A container for the user data information. Type Container owner Description The name of the user that owns the buckets. Type String bucket Description The bucket name. Type String time Description The time lower bound for which data is being specified, rounded to the beginning of the first relevant hour. Type String epoch Description The time specified in seconds since 1/1/1970 . Type String categories Description A container for stats categories. Type Container entry Description A container for stats entry.
Type Container category Description Name of request category for which the stats are provided. Type String bytes_sent Description Number of bytes sent by the Ceph Object Gateway. Type Integer bytes_received Description Number of bytes received by the Ceph Object Gateway. Type Integer ops Description Number of operations. Type Integer successful_ops Description Number of successful operations. Type Integer summary Description A container for the usage summary information. Type Container total Description A container for stats summary aggregated total. Type Container If successful, the response contains the requested information. 2.30. Remove usage information Remove usage information. With no dates specified, removes all usage information. Capabilities Syntax Request Parameters uid Description The user for which the information is requested. Type String Example foo_user Required Yes start Description The date, and optionally, the time of when the data request started. For example, 2012-09-25 16:00:00 . Type String Example 2012-09-25 16:00:00 Required No end Description The date, and optionally, the time of when the data request ended. For example, 2012-09-25 16:00:00 . Type String Example 2012-09-25 16:00:00 Required No remove-all Description Required when uid is not specified, in order to acknowledge multi-user data removal. Type Boolean Example True [False] Required No 2.31. Standard error responses The following list details standard error responses and their descriptions. AccessDenied Description Access denied. Code 403 Forbidden InternalError Description Internal server error. Code 500 Internal Server Error NoSuchUser Description User does not exist. Code 404 Not Found NoSuchBucket Description Bucket does not exist. Code 404 Not Found NoSuchKey Description No such access key. Code 404 Not Found | [
"PUT /admin/user?caps&format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME Content-Type: text/plain Authorization: AUTHORIZATION_TOKEN usage=read",
"Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET",
"httpRequest.addHeader(\"Authorization\", \"AWS \" + this.getAccessKey() + \":\" + base64Sha1Hmac(headerString.toString(), this.getSecretKey()));",
"import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.time.OffsetDateTime; import java.time.format.DateTimeFormatter; import java.time.ZoneId; import org.apache.http.HttpEntity; import org.apache.http.NameValuePair; import org.apache.http.Header; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpRequestBase; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpDelete; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; import org.apache.http.message.BasicNameValuePair; import org.apache.http.util.EntityUtils; import org.apache.http.client.utils.URIBuilder; import java.util.Base64; import java.util.Base64.Encoder; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import javax.crypto.spec.SecretKeySpec; import javax.crypto.Mac; import java.util.Map; import java.util.Iterator; import java.util.Set; import java.util.Map.Entry; public class CephAdminAPI { /* * Each call must specify an access key, secret key, endpoint and format. */ String accessKey; String secretKey; String endpoint; String scheme = \"http\"; //http only. int port = 80; /* * A constructor that takes an access key, secret key, endpoint and format. */ public CephAdminAPI(String accessKey, String secretKey, String endpoint){ this.accessKey = accessKey; this.secretKey = secretKey; this.endpoint = endpoint; } /* * Accessor methods for access key, secret key, endpoint and format. */ public String getEndpoint(){ return this.endpoint; } public void setEndpoint(String endpoint){ this.endpoint = endpoint; } public String getAccessKey(){ return this.accessKey; } public void setAccessKey(String accessKey){ this.accessKey = accessKey; } public String getSecretKey(){ return this.secretKey; } public void setSecretKey(String secretKey){ this.secretKey = secretKey; } /* * Takes an HTTP Method, a resource and a map of arguments and * returns a CloseableHTTPResponse. 
*/ public CloseableHttpResponse execute(String HTTPMethod, String resource, String subresource, Map arguments) { String httpMethod = HTTPMethod; String requestPath = resource; StringBuffer request = new StringBuffer(); StringBuffer headerString = new StringBuffer(); HttpRequestBase httpRequest; CloseableHttpClient httpclient; URI uri; CloseableHttpResponse httpResponse = null; try { uri = new URIBuilder() .setScheme(this.scheme) .setHost(this.getEndpoint()) .setPath(requestPath) .setPort(this.port) .build(); if (subresource != null){ uri = new URIBuilder(uri) .setCustomQuery(subresource) .build(); } for (Iterator iter = arguments.entrySet().iterator(); iter.hasNext();) { Entry entry = (Entry)iter.next(); uri = new URIBuilder(uri) .setParameter(entry.getKey().toString(), entry.getValue().toString()) .build(); } request.append(uri); headerString.append(HTTPMethod.toUpperCase().trim() + \"\\n\\n\\n\"); OffsetDateTime dateTime = OffsetDateTime.now(ZoneId.of(\"GMT\")); DateTimeFormatter formatter = DateTimeFormatter.RFC_1123_DATE_TIME; String date = dateTime.format(formatter); headerString.append(date + \"\\n\"); headerString.append(requestPath); if (HTTPMethod.equalsIgnoreCase(\"PUT\")){ httpRequest = new HttpPut(uri); } else if (HTTPMethod.equalsIgnoreCase(\"POST\")){ httpRequest = new HttpPost(uri); } else if (HTTPMethod.equalsIgnoreCase(\"GET\")){ httpRequest = new HttpGet(uri); } else if (HTTPMethod.equalsIgnoreCase(\"DELETE\")){ httpRequest = new HttpDelete(uri); } else { System.err.println(\"The HTTP Method must be PUT, POST, GET or DELETE.\"); throw new IOException(); } httpRequest.addHeader(\"Date\", date); httpRequest.addHeader(\"Authorization\", \"AWS \" + this.getAccessKey() + \":\" + base64Sha1Hmac(headerString.toString(), this.getSecretKey())); httpclient = HttpClients.createDefault(); httpResponse = httpclient.execute(httpRequest); } catch (URISyntaxException e){ System.err.println(\"The URI is not formatted properly.\"); e.printStackTrace(); } catch (IOException e){ System.err.println(\"There was an error making the request.\"); e.printStackTrace(); } return httpResponse; } /* * Takes a uri and a secret key and returns a base64-encoded * SHA-1 HMAC. */ public String base64Sha1Hmac(String uri, String secretKey) { try { byte[] keyBytes = secretKey.getBytes(\"UTF-8\"); SecretKeySpec signingKey = new SecretKeySpec(keyBytes, \"HmacSHA1\"); Mac mac = Mac.getInstance(\"HmacSHA1\"); mac.init(signingKey); byte[] rawHmac = mac.doFinal(uri.getBytes(\"UTF-8\")); Encoder base64 = Base64.getEncoder(); return base64.encodeToString(rawHmac); } catch (Exception e) { throw new RuntimeException(e); } } }",
"import java.io.IOException; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.HttpEntity; import org.apache.http.util.EntityUtils; import java.util.*; public class CephAdminAPIClient { public static void main (String[] args){ CephAdminAPI adminApi = new CephAdminAPI (\"FFC6ZQ6EMIF64194158N\", \"Xac39eCAhlTGcCAUreuwe1ZuH5oVQFa51lbEMVoT\", \"ceph-client\"); /* * Create a user */ Map requestArgs = new HashMap(); requestArgs.put(\"access\", \"usage=read, write; users=read, write\"); requestArgs.put(\"display-name\", \"New User\"); requestArgs.put(\"email\", \"[email protected]\"); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); CloseableHttpResponse response = adminApi.execute(\"PUT\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); HttpEntity entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Get a user */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); response = adminApi.execute(\"GET\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Modify a user */ requestArgs = new HashMap(); requestArgs.put(\"display-name\", \"John Doe\"); requestArgs.put(\"email\", \"[email protected]\"); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); requestArgs.put(\"max-buckets\", \"100\"); response = adminApi.execute(\"POST\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Create a subuser */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); requestArgs.put(\"subuser\", \"foobar\"); response = adminApi.execute(\"PUT\", \"/admin/user\", \"subuser\", requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Delete a user */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); response = adminApi.execute(\"DELETE\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } } }",
"radosgw-admin user create --uid=\" USER_NAME \" --display-name=\" DISPLAY_NAME \"",
"[user@client ~]USD radosgw-admin user create --uid=\"admin-api-user\" --display-name=\"Admin API User\"",
"{ \"user_id\": \"admin-api-user\", \"display_name\": \"Admin API User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"admin-api-user\", \"access_key\": \"NRWGT19TWMYOB1YDBV1Y\", \"secret_key\": \"gr1VEGIV7rxcP3xvXDFCo4UDwwl2YoNrmtRlIAty\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"temp_url_keys\": [] }",
"radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"users=*\"",
"[user@client ~]USD radosgw-admin caps add --uid=admin-api-user --caps=\"users=*\"",
"{ \"user_id\": \"admin-api-user\", \"display_name\": \"Admin API User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"admin-api-user\", \"access_key\": \"NRWGT19TWMYOB1YDBV1Y\", \"secret_key\": \"gr1VEGIV7rxcP3xvXDFCo4UDwwl2YoNrmtRlIAty\" } ], \"swift_keys\": [], \"caps\": [ { \"type\": \"users\", \"perm\": \"*\" } ], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"temp_url_keys\": [] }",
"users=read",
"GET /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"PUT /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"POST /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"DELETE /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"PUT /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"POST /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"DELETE /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"PUT /admin/user?caps&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"DELETE /admin/user?caps&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"PUT /admin/user?key&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`users=write`",
"DELETE /admin/user?key&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"POST Action=CreateTopic &Name= TOPIC_NAME [&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value= EXCHANGE ] [&Attributes.entry.2.key=amqp-ack-level&Attributes.entry.2.value=none|broker|routable] [&Attributes.entry.3.key=verify-ssl&Attributes.entry.3.value=true|false] [&Attributes.entry.4.key=kafka-ack-level&Attributes.entry.4.value=none|broker] [&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true|false] [&Attributes.entry.6.key=ca-location&Attributes.entry.6.value= FILE_PATH ] [&Attributes.entry.7.key=OpaqueData&Attributes.entry.7.value= OPAQUE_DATA ] [&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value= ENDPOINT ] [&Attributes.entry.9.key=persistent&Attributes.entry.9.value=true|false]",
"<CreateTopicResponse xmlns=\"https://sns.amazonaws.com/doc/2010-03-31/\"> <CreateTopicResult> <TopicArn></TopicArn> </CreateTopicResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </CreateTopicResponse>",
"client.create_topic(Name='my-topic' , Attributes={'push-endpoint': 'amqp://127.0.0.1:5672', 'amqp-exchange': 'ex1', 'amqp-ack-level': 'broker'}) \"",
"POST Action=GetTopic &TopicArn= TOPIC_ARN",
"<GetTopicResponse> <GetTopicRersult> <Topic> <User></User> <Name></Name> <EndPoint> <EndpointAddress></EndpointAddress> <EndpointArgs></EndpointArgs> <EndpointTopic></EndpointTopic> <HasStoredSecret></HasStoredSecret> <Persistent></Persistent> </EndPoint> <TopicArn></TopicArn> <OpaqueData></OpaqueData> </Topic> </GetTopicResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </GetTopicResponse>",
"POST Action=ListTopics",
"<ListTopicdResponse xmlns=\"https://sns.amazonaws.com/doc/2020-03-31/\"> <ListTopicsRersult> <Topics> <member> <User></User> <Name></Name> <EndPoint> <EndpointAddress></EndpointAddress> <EndpointArgs></EndpointArgs> <EndpointTopic></EndpointTopic> </EndPoint> <TopicArn></TopicArn> <OpaqueData></OpaqueData> </member> </Topics> </ListTopicsResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </ListTopicsResponse>",
"POST Action=DeleteTopic &TopicArn= TOPIC_ARN",
"<DeleteTopicResponse xmlns=\"https://sns.amazonaws.com/doc/2020-03-31/\"> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </DeleteTopicResponse>",
"{\"Records\":[ { \"eventVersion\":\"2.1\", \"eventSource\":\"ceph:s3\", \"awsRegion\":\"us-east-1\", \"eventTime\":\"2019-11-22T13:47:35.124724Z\", \"eventName\":\"ObjectCreated:Put\", \"userIdentity\":{ \"principalId\":\"tester\" }, \"requestParameters\":{ \"sourceIPAddress\":\"\" }, \"responseElements\":{ \"x-amz-request-id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595\", \"x-amz-id-2\":\"14d2-zone1-zonegroup1\" }, \"s3\":{ \"s3SchemaVersion\":\"1.0\", \"configurationId\":\"mynotif1\", \"bucket\":{ \"name\":\"mybucket1\", \"ownerIdentity\":{ \"principalId\":\"tester\" }, \"arn\":\"arn:aws:s3:us-east-1::mybucket1\", \"id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38\" }, \"object\":{ \"key\":\"myimage1.jpg\", \"size\":\"1024\", \"eTag\":\"37b51d194a7513e45b56f6524f2d51f2\", \"versionId\":\"\", \"sequencer\": \"F7E6D75DC742D108\", \"metadata\":[], \"tags\":[] } }, \"eventId\":\"\", \"opaqueData\":\"[email protected]\" } ]}",
"`buckets=read`",
"GET /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"GET /admin/bucket?index&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`buckets=write`",
"DELETE /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`buckets=write`",
"PUT /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`buckets=write`",
"POST /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`buckets=read`",
"GET /admin/bucket?policy&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"`buckets=write`",
"DELETE /admin/bucket?object&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"GET /admin/user?quota&uid= UID "a-type=user",
"PUT /admin/user?quota&uid= UID "a-type=user",
"`buckets=read`",
"GET /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME",
"PUT /admin/user?quota&uid= UID "a-type=bucket",
"`usage=read`",
"GET /admin/usage?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME",
"`usage=write`",
"DELETE /admin/usage?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/ceph-object-gateway-administrative-api |
Deploying OpenShift Data Foundation using Microsoft Azure | Deploying OpenShift Data Foundation using Microsoft Azure Red Hat OpenShift Data Foundation 4.13 Instructions on deploying OpenShift Data Foundation using Microsoft Azure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Microsoft Azure. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_microsoft_azure/index |
Chapter 7. Operator SDK | Chapter 7. Operator SDK 7.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.9 and later supports Operator SDK v1.10.1. 7.1.1. Installing the Operator SDK CLI You can install the OpenShift SDK CLI tool on Linux. Prerequisites Go v1.16+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.9.0 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.10.1-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.10.1-ocp", ... 7.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Operator SDK CLI syntax USD operator-sdk <command> [<subcommand>] [<argument>] [<flags>] See Developing Operators for full documentation on the Operator SDK. 7.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 7.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 7.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 7.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 7.2. cleanup flags Flag Description -h , --help Help output for the run bundle subcommand. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 7.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 7.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 7.4. completion flags Flag Description -h, --help Usage help output. 
For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 7.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 7.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 7.5. create api flags Flag Description -h , --help Help output for the run bundle subcommand. 7.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 7.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 7.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResoureDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 7.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 7.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 7.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. 
--output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 7.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 7.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 7.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 7.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 7.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. Additional resources See Operator group membership for details on possible install modes. 7.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 7.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. 7.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 7.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. 
-w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool. | [
"tar xvf operator-sdk-v1.10.1-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.10.1-ocp\",",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cli_tools/operator-sdk |
Chapter 4. Customizing the Red Hat Ceph Storage cluster | Chapter 4. Customizing the Red Hat Ceph Storage cluster Director deploys Red Hat Ceph Storage with a default configuration. You can customize this default configuration. Prerequisites Ceph Storage nodes deployed with their storage network configured. The deployed bare metal file output by openstack overcloud node provision -o ~/deployed_metal.yaml ... . 4.1. Configuration options There are several options for configuring the Red Hat Ceph Storage cluster. Procedure Log in to the undercloud node as the stack user. Optional: Use a standard format initialization (ini) file to configure the Ceph cluster. Create the file with configuration options. The following is an example of a simple configuration file: Save the configuration file. Use the openstack overcloud ceph deploy --config <configuration_file_name> command to deploy the configuration. Replace <configuration_file_name> with the name of the file you created. Optional: Send configuration values to the cephadm bootstrap command: openstack overcloud ceph deploy --force \ --cephadm-extra-args '<optional_arguments>' \ Replace <optional_arguments> with the configuration values to provide to the underlying command. Note When using the arguments --log-to-file and --skip-prepare-host , the command openstack overcloud ceph deploy --force \ --cephadm-extra-args '--log-to-file --skip-prepare-host' \ is used. 4.2. Generating the service specification (optional) The Red Hat Ceph Storage cluster service specification is a YAML file that describes the deployment of Ceph Storage services. It is automatically generated by tripleo before the Ceph Storage cluster is deployed. It does not usually have to be generated separately. A custom service specification can be created to customize the Red Hat Ceph Storage cluster. Procedure Log in to the undercloud node as the stack user. Generate the specification file: openstack overcloud ceph spec -o '<specification_file>' Replace <specification_file> with the name of the file to generate with the current service specification. openstack overcloud ceph spec -o '~/ceph_spec.yaml' Edit the generated file with the required configuration. Deploy the custom service specification: openstack overcloud ceph deploy \ deployed_metal.yaml \ -o deployed_ceph.yaml \ --ceph-spec <specification_file> Replace <specification_file> with the name of the custom service specification file. openstack overcloud ceph deploy \ deployed_metal.yaml \ -o deployed_ceph.yaml \ --ceph-spec ~/ceph_spec.yaml 4.3. Ceph containers for Red Hat OpenStack Platform with Red Hat Ceph Storage You must have a Ceph Storage container to configure Red Hat Openstack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha. You do not require a Ceph Storage container if the external Ceph Storage cluster only provides Block (through RBD), Object (through RGW), or File (through native CephFS) storage. RHOSP 17.0 requires Red Hat Ceph Storage 5.x (Ceph package 16.x) or later to be compatible with Red Hat Enterprise Linux 9. The Ceph Storage 5.x containers are hosted at registry.redhat.io , a registry that requires authentication. For more information, see Container image preparation parameters . 4.4. Configuring advanced OSD specifications Configure an advanced OSD specification when the default specification does not provide the necessary functionality for your Ceph Storage cluster. Procedure Log in to the undercloud node as the stack user. 
Create a YAML format file that defines the advanced OSD specification. The following is an example of a custom OSD specification. This example would create an OSD specification where all rotating devices will be data devices and all non-rotating devices will be used as shared devices. When the dynamic Ceph service specification is built, whatever is in the specification file is appended to the section of the specification if the service_type is osd . Save the specification file. Deploy the specification: openstack overcloud ceph deploy \ --osd-spec <osd_specification_file> Replace <osd_specification_file> with the name of the specification file you created. USD openstack overcloud ceph deploy \ --osd-spec osd_spec.yaml \ Additional resources For a list of OSD-related attributes used to configure OSDs in the service specification, see Advanced service specifications and filters for deploying OSDs in the Red Hat Ceph Storage Operations Guide . 4.5. Migrating from node-specific overrides Node-specific overrides were used to manage non-homogenous server hardware before Red Hat OpenStack Platform 17.0. This is now done with a custom OSD specification file. See Configuring advanced OSD specifications for information on how to create a custom OSD specification file. 4.6. Enabling Ceph on-wire encryption Enable encryption for all Ceph Storage traffic using the secure mode of the messenger version 2 protocol. Configure Ceph Storage as described in Encryption and Key Management in the Red Hat Ceph Storage Data Security and Hardening Guide to enable Ceph on-wire encryption. Additional resources For more information about Ceph on-wire encryption, see Ceph on-wire encryption in the Red Hat Ceph Storage Architecture Guide . | [
"[global] osd crush chooseleaf type = 0 log_file = /var/log/ceph/USDcluster-USDtype.USDid.log [mon] mon_cluster_log_to_syslog = true",
"openstack overcloud ceph deploy --config initial-ceph.conf",
"data_devices: rotational: 1 db_devices: rotational: 0"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_customizing-the-ceph-storage-cluster_deployingcontainerizedrhcs |
21.3.5. Adding an IPP Printer | 21.3.5. Adding an IPP Printer An IPP printer is a printer attached to a different system on the same TCP/IP network. The system this printer is attached to may either be running CUPS or configured to use IPP. If a firewall is enabled on the printer server, then the firewall must be configured to allow incoming TCP connections on port 631. Note that the CUPS browsing protocol allows client machines to discover shared CUPS queues automatically. To enable this, the firewall on the client machine must be configured to allow incoming UDP packets on port 631. Follow this procedure to add an IPP printer: Open the New Printer dialog (see Section 21.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer and Internet Printing Protocol (ipp) or Internet Printing Protocol (https) . On the right, enter the connection settings: Host The host name of the IPP printer. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 21.6. Adding an IPP printer Click Forward to continue. Select the printer model. See Section 21.3.8, "Selecting the Printer Model and Finishing" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-printing-ipp-printer |
Chapter 28. Finding information on Kafka restarts | Chapter 28. Finding information on Kafka restarts After the Cluster Operator restarts a Kafka pod in an OpenShift cluster, it emits an OpenShift event into the pod's namespace explaining why the pod restarted. For help in understanding cluster behavior, you can check restart events from the command line. Tip You can export and monitor restart events using metrics collection tools like Prometheus. Use the metrics tool with an event exporter that can export the output in a suitable format. 28.1. Reasons for a restart event The Cluster Operator initiates a restart event for a specific reason. You can check the reason by fetching information on the restart event. Table 28.1. Restart reasons Event Description CaCertHasOldGeneration The pod is still using a server certificate signed with an old CA, so needs to be restarted as part of the certificate update. CaCertRemoved Expired CA certificates have been removed, and the pod is restarted to run with the current certificates. CaCertRenewed CA certificates have been renewed, and the pod is restarted to run with the updated certificates. ClientCaCertKeyReplaced The key used to sign clients CA certificates has been replaced, and the pod is being restarted as part of the CA renewal process. ClusterCaCertKeyReplaced The key used to sign the cluster's CA certificates has been replaced, and the pod is being restarted as part of the CA renewal process. ConfigChangeRequiresRestart Some Kafka configuration properties are changed dynamically, but others require that the broker be restarted. FileSystemResizeNeeded The file system size has been increased, and a restart is needed to apply it. KafkaCertificatesChanged One or more TLS certificates used by the Kafka broker have been updated, and a restart is needed to use them. ManualRollingUpdate A user annotated the pod, or the StrimziPodSet set it belongs to, to trigger a restart. PodForceRestartOnError An error occurred that requires a pod restart to rectify. PodHasOldRevision A disk was added or removed from the Kafka volumes, and a restart is needed to apply the change. When using StrimziPodSet resources, the same reason is given if the pod needs to be recreated. PodHasOldRevision The StrimziPodSet that the pod is a member of has been updated, so the pod needs to be recreated. When using StrimziPodSet resources, the same reason is given if a disk was added or removed from the Kafka volumes. PodStuck The pod is still pending, and is not scheduled or cannot be scheduled, so the operator has restarted the pod in a final attempt to get it running. PodUnresponsive Streams for Apache Kafka was unable to connect to the pod, which can indicate a broker not starting correctly, so the operator restarted it in an attempt to resolve the issue. 28.2. Restart event filters When checking restart events from the command line, you can specify a field-selector to filter on OpenShift event fields. The following fields are available when filtering events with field-selector . regardingObject.kind The object that was restarted, and for restart events, the kind is always Pod . regarding.namespace The namespace that the pod belongs to. regardingObject.name The pod's name, for example, strimzi-cluster-kafka-0 . regardingObject.uid The unique ID of the pod. reason The reason the pod was restarted, for example, JbodVolumesChanged . reportingController The reporting component is always strimzi.io/cluster-operator for Streams for Apache Kafka restart events. 
source source is an older version of reportingController . The reporting component is always strimzi.io/cluster-operator for Streams for Apache Kafka restart events. type The event type, which is either Warning or Normal . For Streams for Apache Kafka restart events, the type is Normal . Note In older versions of OpenShift, the fields using the regarding prefix might use an involvedObject prefix instead. reportingController was previously called reportingComponent . 28.3. Checking Kafka restarts Use a oc command to list restart events initiated by the Cluster Operator. Filter restart events emitted by the Cluster Operator by setting the Cluster Operator as the reporting component using the reportingController or source event fields. Prerequisites The Cluster Operator is running in the OpenShift cluster. Procedure Get all restart events emitted by the Cluster Operator: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator Example showing events returned LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled You can also specify a reason or other field-selector options to constrain the events returned. Here, a specific reason is added: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError Use an output format, such as YAML, to return more detailed information about one or more events. oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml Example showing detailed events output apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: "2022-05-13T00:22:34.168086Z" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: "2022-05-13T00:22:34Z" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: "432961" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: "" selfLink: "" The following fields are deprecated, so they are not populated for these events: firstTimestamp lastTimestamp source | [
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator",
"LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml",
"apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\""
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-deploy-restart-events-str |
Chapter 129. Guava EventBus Component | Chapter 129. Guava EventBus Component Available as of Camel version 2.10 The Google Guava EventBus allows publish-subscribe-style communication between components without requiring the components to explicitly register with one another (and thus be aware of each other). The guava-eventbus: component provides integration bridge between Camel and Google Guava EventBus infrastructure. With the latter component, messages exchanged with the Guava EventBus can be transparently forwarded to the Camel routes. EventBus component allows also to route body of Camel exchanges to the Guava EventBus . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-guava-eventbus</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 129.1. URI format guava-eventbus:busName[?options] Where busName represents the name of the com.google.common.eventbus.EventBus instance located in the Camel registry. 129.2. Options The Guava EventBus component supports 3 options, which are listed below. Name Description Default Type eventBus (common) To use the given Guava EventBus instance EventBus listenerInterface (common) The interface with method(s) marked with the Subscribe annotation. Dynamic proxy will be created over the interface so it could be registered as the EventBus listener. Particularly useful when creating multi-event listeners and for handling DeadEvent properly. This option cannot be used together with eventClass option. Class resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Guava EventBus endpoint is configured using URI syntax: with the following path and query parameters: 129.2.1. Path Parameters (1 parameters): Name Description Default Type eventBusRef To lookup the Guava EventBus from the registry with the given name String 129.2.2. Query Parameters (6 parameters): Name Description Default Type eventClass (common) If used on the consumer side of the route, will filter events received from the EventBus to the instances of the class and superclasses of eventClass. Null value of this option is equal to setting it to the java.lang.Object i.e. the consumer will capture all messages incoming to the event bus. This option cannot be used together with listenerInterface option. Class listenerInterface (common) The interface with method(s) marked with the Subscribe annotation. Dynamic proxy will be created over the interface so it could be registered as the EventBus listener. Particularly useful when creating multi-event listeners and for handling DeadEvent properly. This option cannot be used together with eventClass option. Class bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 129.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.guava-eventbus.enabled Enable guava-eventbus component true Boolean camel.component.guava-eventbus.event-bus To use the given Guava EventBus instance. The option is a com.google.common.eventbus.EventBus type. String camel.component.guava-eventbus.listener-interface The interface with method(s) marked with the Subscribe annotation. Dynamic proxy will be created over the interface so it could be registered as the EventBus listener. Particularly useful when creating multi-event listeners and for handling DeadEvent properly. This option cannot be used together with eventClass option. Class camel.component.guava-eventbus.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 129.4. Usage Using the guava-eventbus component on the consumer side of the route will capture messages sent to the Guava EventBus and forward them to the Camel route. The Guava EventBus consumer processes incoming messages asynchronously. SimpleRegistry registry = new SimpleRegistry(); EventBus eventBus = new EventBus(); registry.put("busName", eventBus); CamelContext camel = new DefaultCamelContext(registry); from("guava-eventbus:busName").to("seda:queue"); eventBus.post("Send me to the SEDA queue."); Using the guava-eventbus component on the producer side of the route will forward the body of the Camel exchanges to the Guava EventBus instance. SimpleRegistry registry = new SimpleRegistry(); EventBus eventBus = new EventBus(); registry.put("busName", eventBus); CamelContext camel = new DefaultCamelContext(registry); from("direct:start").to("guava-eventbus:busName"); ProducerTemplate producerTemplate = camel.createProducerTemplate(); producerTemplate.sendBody("direct:start", "Send me to the Guava EventBus."); eventBus.register(new Object(){ @Subscribe public void messageHandler(String message) { System.out.println("Message received from the Camel: " + message); } }); 129.5. DeadEvent considerations Keep in mind that due to the limitations caused by the design of the Guava EventBus, you cannot specify the event class to be received by the listener without creating a class with a method annotated with @Subscribe. This limitation implies that an endpoint with the eventClass option specified actually listens to all possible events ( java.lang.Object ) and filters the appropriate messages programmatically at runtime. The snippet below shows the relevant excerpt from the Camel code base. @Subscribe public void eventReceived(Object event) { if (eventClass == null || eventClass.isAssignableFrom(event.getClass())) { doEventReceived(event); ... The drawback of this approach is that the EventBus instance used by Camel will never generate com.google.common.eventbus.DeadEvent notifications. If you want Camel to listen only to a precisely specified event (and therefore enable DeadEvent support), use the listenerInterface endpoint option.
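For illustration, such a listener interface can subscribe to com.google.common.eventbus.DeadEvent alongside your own event types. The sketch below is not taken from the Camel documentation; the interface name and the OrderEvent class are hypothetical placeholders:

package com.example;

import com.google.common.eventbus.DeadEvent;
import com.google.common.eventbus.Subscribe;

// Hypothetical listener interface: one handler for an application event type,
// one handler for events that no other subscriber consumed.
public interface AuditingListener {

    @Subscribe
    void orderReceived(OrderEvent event); // OrderEvent is an assumed application class

    @Subscribe
    void unhandledEvent(DeadEvent event); // receives DeadEvent notifications
}

Registered through listenerInterface=com.example.AuditingListener, the proxy created by Camel would then receive DeadEvent notifications in addition to OrderEvent instances, which is the behavior the eventClass option cannot provide.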
Camel will create a dynamic proxy over the interface you specify with the listenerInterface option and listen only to the messages specified by the interface handler methods. The following example shows a listener interface with a single method handling only SpecificEvent instances. package com.example; public interface CustomListener { @Subscribe void eventReceived(SpecificEvent event); } The listener presented above could be used in the endpoint definition as follows. from("guava-eventbus:busName?listenerInterface=com.example.CustomListener").to("seda:queue"); 129.6. Consuming multiple types of events To define multiple types of events to be consumed by the Guava EventBus consumer, use the listenerInterface endpoint option, as the listener interface can provide multiple methods marked with the @Subscribe annotation. package com.example; public interface MultipleEventsListener { @Subscribe void someEventReceived(SomeEvent event); @Subscribe void anotherEventReceived(AnotherEvent event); } The listener presented above could be used in the endpoint definition as follows. from("guava-eventbus:busName?listenerInterface=com.example.MultipleEventsListener").to("seda:queue"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-guava-eventbus</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"guava-eventbus:busName[?options]",
"guava-eventbus:eventBusRef",
"SimpleRegistry registry = new SimpleRegistry(); EventBus eventBus = new EventBus(); registry.put(\"busName\", eventBus); CamelContext camel = new DefaultCamelContext(registry); from(\"guava-eventbus:busName\").to(\"seda:queue\"); eventBus.post(\"Send me to the SEDA queue.\");",
"SimpleRegistry registry = new SimpleRegistry(); EventBus eventBus = new EventBus(); registry.put(\"busName\", eventBus); CamelContext camel = new DefaultCamelContext(registry); from(\"direct:start\").to(\"guava-eventbus:busName\"); ProducerTemplate producerTemplate = camel.createProducerTemplate(); producer.sendBody(\"direct:start\", \"Send me to the Guava EventBus.\"); eventBus.register(new Object(){ @Subscribe public void messageHander(String message) { System.out.println(\"Message received from the Camel: \" + message); } });",
"@Subscribe public void eventReceived(Object event) { if (eventClass == null || eventClass.isAssignableFrom(event.getClass())) { doEventReceived(event);",
"package com.example; public interface CustomListener { @Subscribe void eventReceived(SpecificEvent event); }",
"from(\"guava-eventbus:busName?listenerInterface=com.example.CustomListener\").to(\"seda:queue\");",
"package com.example; public interface MultipleEventsListener { @Subscribe void someEventReceived(SomeEvent event); @Subscribe void anotherEventReceived(AnotherEvent event); }",
"from(\"guava-eventbus:busName?listenerInterface=com.example.MultipleEventsListener\").to(\"seda:queue\");"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/guava-eventbus-component |
function::commit | function::commit Name function::commit - Write out all output related to a speculation buffer Synopsis Arguments id of the buffer to store the information in Description Output all the output for id in the order that it was entered into the speculative buffer by speculate . | [
"commit(id:long)"
]
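In practice, commit is used together with the other speculation functions from the same tapset ( speculation to allocate a buffer id and speculate to queue output against it). The probe below is an illustrative sketch only and is not part of the reference entry:

probe begin {
  id = speculation()                 # allocate a speculation buffer id
  speculate(id, "buffered line\n")   # queue output against that id
  commit(id)                         # write out everything queued for id
  exit()
}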
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-commit |
Deploying OpenShift Data Foundation using bare metal infrastructure | Deploying OpenShift Data Foundation using bare metal infrastructure Red Hat OpenShift Data Foundation 4.18 Instructions on deploying OpenShift Data Foundation using local storage on bare metal infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on bare metal infrastructure. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS) follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your vault servers. After you have addressed the above, perform the following steps: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on bare metal . 1.1. 
Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. Note Make sure that the devices have a unique by-id device name for each available raw block device. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Compact mode requirements You can install OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. To configure OpenShift Container Platform in compact mode, see the Configuring a three-node cluster section of the Installing guide in OpenShift Container Platform documentation, and Delivering a Three-node Architecture for Edge Deployments . Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on bare metal infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. 
For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on bare metal . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
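You can also spot-check the installation from the command line before moving on; for example (the exact ClusterServiceVersion name and version in the output will differ per cluster):

oc get csv -n openshift-storage

The OpenShift Data Foundation ClusterServiceVersion listed there should report the Succeeded phase.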
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Creating OpenShift Data Foundation cluster on bare metal Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. Ensure that the disk type is SSD, which is the only supported disk type. If you want to use the multi network plug-in (Multus), before deployment you must create network attachment definitions (NADs) that is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . 
In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use Ceph RBD as the default StorageClass . This avoids having to manually annotate a StorageClass. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. 
Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . 
Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. 2.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.6.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
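If you prefer the command line, an equivalent quick check is to list the pods in the project, for example:

oc get pods -n openshift-storage

All of the pods described below should eventually reach the Running or Completed state.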
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 2.6.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. 
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.6.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally the cluster will be self configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network and cluster network for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm the OSD pods are using correct network In the openshift-storage namespace use one of the OSD pods to verify the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm the OSD pods are using correct network using the command line interface, run the following command (requires the jq utility): Sample output: Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. 
For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. 
Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. 
Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public",
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
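The local volume set that the wizard creates in section 2.5 is backed by a LocalVolumeSet resource owned by the Local Storage Operator. The YAML below is only an illustrative sketch of such a resource; the name, node values, and device filter are placeholders, and the accepted fields can vary with the Local Storage Operator version:

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localblock                   # placeholder; also used as the StorageClass name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock
  volumeMode: Block                  # Block is the mode described for the ODF device sets
  deviceInclusionSpec:
    deviceTypes:
      - disk
    minSize: 100Gi                   # mirrors the 100 GB minimum noted in the wizard
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:                  # placeholder node names
              - worker-0
              - worker-1
              - worker-2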
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-using-local-storage-devices-bm |
Chapter 9. IPv6 and dual-stack deployments | Chapter 9. IPv6 and dual-stack deployments Your standalone Red Hat Quay deployment can now be served in locations that only support IPv6, such as Telco and Edge environments. Support is also offered for dual-stack networking so your Red Hat Quay deployment can listen on IPv4 and IPv6 simultaneously. For a list of known limitations, see IPv6 limitations . 9.1. Enabling the IPv6 protocol family Use the following procedure to enable IPv6 support on your standalone Red Hat Quay deployment. Prerequisites You have updated Red Hat Quay to 3.8. Your host and container software platform (Docker, Podman) must be configured to support IPv6. Procedure In your deployment's config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6 , for example: --- FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false --- Start, or restart, your Red Hat Quay deployment. Check that your deployment is listening to IPv6 by entering the following command: USD curl <quay_endpoint>/health/instance {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} After enabling IPv6 in your deployment's config.yaml , all Red Hat Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the current limitations (see Section 9.3, IPv6 and dual-stack limitations). Warning If your environment is configured for IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6 , Red Hat Quay will fail to deploy. 9.2. Enabling the dual-stack protocol family Use the following procedure to enable dual-stack (IPv4 and IPv6) support on your standalone Red Hat Quay deployment. Prerequisites You have updated Red Hat Quay to 3.8. Your host and container software platform (Docker, Podman) must be configured to support IPv6. Procedure In your deployment's config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to dual-stack , for example: --- FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: dual-stack FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false --- Start, or restart, your Red Hat Quay deployment. Check that your deployment is listening to both channels by entering the following command: For IPv4, enter the following command: USD curl --ipv4 <quay_endpoint> {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For IPv6, enter the following command: USD curl --ipv6 <quay_endpoint> {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} After enabling dual-stack in your deployment's config.yaml , all Red Hat Quay features can be used as normal, so long as your environment is configured for dual-stack. 9.3. IPv6 and dual-stack limitations Currently, attempting to configure your Red Hat Quay deployment with the common Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Azure Blob Storage does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4433 .
Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4470 . | [
"--- FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false ---",
"curl <quay_endpoint>/health/instance {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}",
"--- FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: dual-stack FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false ---",
"curl --ipv4 <quay_endpoint> {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}",
"curl --ipv6 <quay_endpoint> {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
]
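As a host-level sanity check before enabling either mode, you can confirm that the machine running Red Hat Quay has a global IPv6 address. The following is a generic illustration using standard iproute2 tooling rather than a Quay-specific command: $ ip -6 addr show scope global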
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/proc_manage-ipv6-dual-stack |
Power Monitoring | Power Monitoring OpenShift Container Platform 4.15 Configuring and using power monitoring for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/power_monitoring/index |
Chapter 1. OpenShift Container Platform 4.15 Documentation | Chapter 1. OpenShift Container Platform 4.15 Documentation Welcome to the official OpenShift Container Platform 4.15 documentation, where you can learn about OpenShift Container Platform and start exploring its features. To navigate the OpenShift Container Platform 4.15 documentation, you can use one of the following methods: Use the left navigation bar to browse the documentation. Select the task that interests you from the contents of this Welcome page. Start with Architecture and Security and compliance . Next, view the release notes . 1.1. Cluster installer activities Explore the following OpenShift Container Platform installation tasks: OpenShift Container Platform installation overview : Depending on the platform, you can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The OpenShift Container Platform installation program provides the flexibility to deploy OpenShift Container Platform on a range of different platforms. Install a cluster on Alibaba : On Alibaba Cloud, you can install OpenShift Container Platform on installer-provisioned infrastructure. This is currently a Technology Preview feature only. Install a cluster on AWS : On AWS, you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. Install a cluster on Azure : On Microsoft Azure, you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. Install a cluster on Azure Stack Hub : On Microsoft Azure Stack Hub, you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. Installing OpenShift Container Platform with the Assisted Installer : The Assisted Installer is an installation solution that is provided on the Red Hat Hybrid Cloud Console. The Assisted Installer supports installing an OpenShift Container Platform cluster on many platforms, but with a focus on bare metal, Nutanix, and VMware vSphere infrastructures. Installing OpenShift Container Platform with the Agent-based Installer : You can use the Agent-based Installer to generate a bootable ISO image that contains the Assisted discovery agent, the Assisted Service, and all the other information required to deploy an OpenShift Container Platform cluster. The Agent-based Installer leverages the advantages of the Assisted Installer in a disconnected environment. Install a cluster on bare metal : On bare metal, you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. If none of the available platform and cloud provider deployment options meet your needs, consider using bare metal user-provisioned infrastructure. Install a cluster on GCP : On Google Cloud Platform (GCP), you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. Install a cluster on IBM Cloud(R) : On IBM Cloud(R), you can install OpenShift Container Platform on installer-provisioned infrastructure. Install a cluster on IBM Power(R) Virtual Server : On IBM Power(R) Virtual Server, you can install OpenShift Container Platform on installer-provisioned infrastructure. Install a cluster on IBM Power(R) : On IBM Power(R), you can install OpenShift Container Platform on user-provisioned infrastructure.
Install a cluster on IBM Z(R) and IBM(R) LinuxONE : On IBM Z(R) and IBM(R) LinuxONE, you can install OpenShift Container Platform on user-provisioned infrastructure. Install a cluster on Oracle(R) Cloud Infrastructure (OCI) : You can use the Assisted Installer or the Agent-based Installer to install a cluster on OCI. This means that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. See Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer and Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer . Install a cluster on Nutanix : On Nutanix, you can install a cluster on your OpenShift Container Platform on installer-provisioned infrastructure. Install a cluster on Red Hat OpenStack Platform (RHOSP) : On RHOSP, you can install OpenShift Container Platform on installer-provisioned infrastructure or user-provisioned infrastructure. Install a cluster on VMware vSphere : You can install OpenShift Container Platform on supported versions of vSphere. 1.2. Other cluster installer activities Install a cluster in a restricted network : If your cluster uses user-provisioned infrastructure on AWS , GCP , vSphere , IBM Cloud(R) , IBM Z(R) and IBM(R) LinuxONE , IBM Power(R) , or bare metal and the cluster does not have full access to the internet, you must mirror the OpenShift Container Platform installation images. To do this action, use one of the following methods, so that you can install a cluster in a restricted network. Mirroring images for a disconnected installation Mirroring images for a disconnected installation by using the oc-mirror plug-in Install a cluster in an existing network : If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Microsoft Azure, you can install a cluster. Also consider Installing a cluster on GCP into a shared VPC Install a private cluster : If your cluster does not require external internet access, you can install a private cluster on AWS , Azure , GCP , or IBM Cloud(R) . Internet access is still required to access the cloud APIs and installation media. Check installation logs : Access installation logs to evaluate issues that occur during OpenShift Container Platform installation. Access OpenShift Container Platform : Use credentials output at the end of the installation process to log in to the OpenShift Container Platform cluster from the command line or web console. Install Red Hat OpenShift Data Foundation : You can install Red Hat OpenShift Data Foundation as an Operator to provide highly integrated and simplified persistent storage management for containers. Red Hat Enterprise Linux CoreOS (RHCOS) image layering : As a post-installation task, you can add new images on top of the base RHCOS image. This layering does not modify the base RHCOS image. Instead, the layering creates a custom layered image that includes all RHCOS functions and adds additional functions to specific nodes in the cluster. 1.3. Developer activities Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. 
Read the following OpenShift Container Platform documentation, so that you can better understand OpenShift Container Platform functions: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Creating applications using the Developer perspective : Use the Developer perspective in the OpenShift Container Platform web console to easily create and deploy applications. Viewing application composition using the Topology view : Use the Topology view to visually interact with your applications, monitor status, connect and group components, and modify your code base. Understanding Service Binding Operator : With the Service Binding Operator, an application developer can bind workloads with Operator-managed backing services by automatically collecting and sharing binding data with the workloads. The Service Binding Operator improves the development lifecycle with a consistent and declarative service binding method that prevents discrepancies in cluster environments. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture. Manage your infrastructure and application configurations : GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles. Deploy Helm charts : Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials, such as Git repositories, local binary inputs, and external artifacts. You can follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform and Kubernetes applications. By defining image streams, you can gather multiple versions of an image in one place as you continue to develop the image stream. With S2I containers, you can insert your source code into a base container. The base container is configured to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment objects to exert fine-grained management over applications. Deployments create replica sets according to the rollout strategy, which orchestrates pod lifecycles. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. 
A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.15. Learn about the Operator Framework and how to deploy applications by using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.15. Learn the workflow for building, testing, and deploying Operators. You can then create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring by using the Operator SDK. Reference the REST API index : Learn about OpenShift Container Platform application programming interface endpoints. Software Supply Chain Security enhancements : The PipelineRun details page in the Developer or Administrator perspective of the web console provides a visual representation of identified vulnerabilities, which are categorized by severity. Additionally, these enhancements provide an option to download or view Software Bill of Materials (SBOMs) for enhanced transparency and control within your supply chain. Learn about setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements . 1.4. Cluster administrator activities Manage machines, provide services to users, and follow monitoring and logging reports. Read the following OpenShift Container Platform documentation, so that you can better understand OpenShift Container Platform functions: Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.15 control plane. See how OpenShift Container Platform control plane and compute nodes are managed and updated through the Machine API and Operators . Cluster capabilities : As a cluster administrator, you can enable cluster capabilities that were disabled before installation. 1.4.1. Manage cluster components Manage machines : Manage compute and control plane machines in your cluster with machine sets, by deploying health checks , and applying autoscaling . Manage container registries : Each OpenShift Container Platform cluster includes a built-in container registry for storing its images. You can also configure a separate Red Hat Quay registry to use with OpenShift Container Platform. The Quay.io website provides a public container registry that stores OpenShift Container Platform containers and Operators. Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers . Manage ingress , API server , and service certificates : OpenShift Container Platform creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. You might need to change, add, or rotate these certificates. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. 
By using network policy features, you can isolate your pods or permit selected traffic. Manage storage : With OpenShift Container Platform, a cluster administrator can configure persistent storage by using Red Hat OpenShift Data Foundation , AWS Elastic Block Store , NFS , iSCSI , Container Storage Interface (CSI) , and more. You can expand persistent volumes , configure dynamic provisioning , and use CSI to configure , clone , and use snapshots of persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. Understanding Windows container workloads . You can use the Red Hat OpenShift support for Windows Containers feature to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. 1.4.2. Change cluster components Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Update a cluster : Use the Cluster Version Operator (CVO) to upgrade your OpenShift Container Platform cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from the OpenShift Container Platform web console or the OpenShift CLI ( oc ). Using the OpenShift Update Service in a disconnected environment : You can use the OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments. Improving cluster stability in high latency environments by using worker latency profiles : If your network has latency issues, you can use one of three worker latency profiles to help ensure that your control plane does not accidentally evict pods in case it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster. 1.4.3. Observe a cluster OpenShift Logging : Learn about logging and configure different logging components, such as log storage, log collectors, and the logging web console plugin. Red Hat OpenShift distributed tracing platform : Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. Red Hat build of OpenTelemetry : Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open source backends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. 
Network Observability : Observe network traffic for OpenShift Container Platform clusters by using eBPF technology to create and enrich network flows. You can view dashboards, customize alerts , and analyze network flow information for further insight and troubleshooting. In-cluster monitoring : Learn to configure the monitoring stack . After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. By using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring . Power monitoring for Red Hat OpenShift (Technology Preview) : You can use power monitoring for Red Hat OpenShift to monitor the power usage and identify power-consuming containers running in an OpenShift Container Platform cluster. Power monitoring collects and exports energy-related system statistics from various components, such as CPU and DRAM. Power monitoring provides granular power consumption data for Kubernetes pods, namespaces, and nodes. 1.5. Hosted control plane activities Support for bare metal and OpenShift Virtualization : Hosted control planes for OpenShift Container Platform is now Generally Available on bare metal and OpenShift Virtualization platforms. For more information, see the following documentation: Configuring hosted control plane clusters on bare metal Managing hosted control plane clusters on OpenShift Virtualization Technology Preview features : Hosted control planes remains available as a Technology Preview feature on the Amazon Web Services, IBM Power(R), and IBM Z(R) platforms. You can now provision a hosted control plane cluster by using the non bare metal agent machines. For more information, see the following documentation: Configuring the hosting cluster on AWS (Technology Preview) Configuring the hosting cluster on a 64-bit x86 OpenShift Container Platform cluster to create hosted control planes for IBM Power(R) compute nodes (Technology Preview) Configuring the hosted cluster on 64-bit x86 bare metal for IBM Z(R) compute nodes (Technology Preview) Configuring hosted control plane clusters using non bare metal agent machines (Technology Preview) | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/about/welcome-index |
Chapter 3. Managing services | Chapter 3. Managing services 3.1. Configuring OpenAPI services The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service's capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using the OpenAPI, you can understand and interact with it using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service. 3.1.1. OpenAPI function definition OpenShift Serverless Logic allows workflows to interact with remote services using an OpenAPI specification reference in a function. Example OpenAPI function definition { "functions": [ { "name": "myFunction1", "operation": "classpath:/myopenapi-file.yaml#myFunction1" } ] } The operation attribute is a string composed of the following parameters: URI : The engine uses this to locate the specification file, such as classpath . Operation identifier: You can find this identifier in the OpenAPI specification file. OpenShift Serverless Logic supports the following URI schemes: classpath: Use this for files located in the src/main/resources folder of the application project. classpath is the default URI scheme. If you do not define a URI scheme, the file location is src/main/resources/myopenapifile.yaml . file: Use this for files located in the file system. http or https: Use these for remotely located files. Ensure the OpenAPI specification files are available during build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic will not have access to these files. If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file. 3.1.2. Sending REST requests based on the OpenAPI specification To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures: Define the function references Access the defined functions in the workflow states Prerequisites You have the OpenShift Serverless Logic Operator installed on your cluster. You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to the OpenAPI specification files. Procedure To define the OpenAPI functions: Identify and access the OpenAPI specification files for the services you intend to invoke. Copy the OpenAPI specification files into your workflow service directory, such as src/main/resources/specs .
The following example shows the OpenAPI specification for the multiplication REST service: Example multiplication REST service OpenAPI specification openapi: 3.0.3 info: title: Generated API version: "1.0" paths: /: post: operationId: doOperation parameters: - in: header name: notUsed schema: type: string required: false requestBody: content: application/json: schema: $ref: '#/components/schemas/MultiplicationOperation' responses: "200": description: OK content: application/json: schema: type: object properties: product: format: float type: number components: schemas: MultiplicationOperation: type: object properties: leftElement: format: float type: number rightElement: format: float type: number To define functions in the workflow, use the operationId from the OpenAPI specification to reference the desired operations in your function definitions. Example function definitions in the temperature conversion application { "functions": [ { "name": "multiplication", "operation": "specs/multiplication.yaml#doOperation" }, { "name": "subtraction", "operation": "specs/subtraction.yaml#doOperation" } ] } Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the src/main/resources/specs directory. To access the defined functions in the workflow states: Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier. Use the functionRef attribute to refer to the specific function by its name. Map the arguments in the functionRef using the parameters defined in the OpenAPI specification. The following example shows how function arguments are mapped in the workflow: Example for mapping function arguments in workflow { "states": [ { "name": "SetConstants", "type": "inject", "data": { "subtractValue": 32.0, "multiplyValue": 0.5556 }, "transition": "Computation" }, { "name": "Computation", "actionMode": "sequential", "type": "operation", "actions": [ { "name": "subtract", "functionRef": { "refName": "subtraction", "arguments": { "leftElement": ".fahrenheit", "rightElement": ".subtractValue" } } }, { "name": "multiply", "functionRef": { "refName": "multiplication", "arguments": { "leftElement": ".difference", "rightElement": ".multiplyValue" } } } ], "end": { "terminate": true } } ] } Check the Operation Object section of the OpenAPI specification to understand how to structure parameters in the request. Use jq expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification. For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.
For more information about mapping parameters in the request path instead of request body, you can refer to the following PetStore API example: Example for mapping path parameters { "/pet/{petId}": { "get": { "tags": ["pet"], "summary": "Find pet by ID", "description": "Returns a single pet", "operationId": "getPetById", "parameters": [ { "name": "petId", "in": "path", "description": "ID of pet to return", "required": true, "schema": { "type": "integer", "format": "int64" } } ] } } } Following is an example invocation of a function, in which only one parameter named petId is added in the request path: Example of calling the PetStore function { "name": "CallPetStore", 1 "actionMode": "sequential", "type": "operation", "actions": [ { "name": "getPet", "functionRef": { "refName": "getPetById", 2 "arguments": { 3 "petId": ".petId" } } } ] } 1 State definition, such as CallPetStore . 2 Function definition reference. In the example, the function definition getPetById is for the PetStore OpenAPI specification. 3 Arguments definition. OpenShift Serverless Logic adds the argument petId to the request path before sending a request. 3.1.3. Configuring the endpoint URL of OpenAPI services After accessing the function definitions in workflow states, you can configure the endpoint URL of OpenAPI services. Prerequisites You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created your OpenShift Serverless Logic project. You have access to the OpenAPI specification files. You have defined the function definitions in the workflow. You have access to the defined functions in the workflow states. Procedure Locate the OpenAPI specification file you want to configure. For example, subtraction.yaml . Convert the file name into a valid configuration key by replacing special characters, such as . , with underscores and converting letters to lowercase. For example, change subtraction.yaml to subtraction_yaml . To define the configuration key, use the converted file name as the REST client configuration key. Set this key in your application configuration, or as an environment variable, as shown in the following example: quarkus.rest-client.subtraction_yaml.url=http://myserver.com To prevent hardcoding URLs in the application.properties file, use environment variable substitution, as shown in the following example: quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com} In this example: Configuration Key: quarkus.rest-client.subtraction_yaml.url Environment variable: SUBTRACTION_URL Fallback URL: http://myserver.com Ensure that the SUBTRACTION_URL environment variable is set in your system or deployment environment. If the variable is not found, the application uses the fallback URL (http://myserver.com) . Add the configuration key and URL substitution to the application.properties file: quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com} Deploy or restart your application to apply the new configuration settings. 3.2. Configuring OpenAPI services endpoints OpenShift Serverless Logic uses the kogito.sw.operationIdStrategy property to generate the REST client for invoking services defined in OpenAPI documents. This property determines how the configuration key is derived for the REST client configuration. The kogito.sw.operationIdStrategy property supports the following values: FILE_NAME , FULL_URI , FUNCTION_NAME , and SPEC_TITLE .
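For example, to select a strategy explicitly, you might add a line such as the following to your application.properties file; the FULL_URI value here is only an illustration, and any of the listed values can be used: kogito.sw.operationIdStrategy=FULL_URI Each strategy derives the configuration key as follows.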
FILE_NAME OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores. Example configuration: quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/ 1 1 The OpenAPI File Path is src/main/resources/openapi/stock-portfolio-svc.yaml . The generated key that configures the URL for the REST client is stock_portfolio_svc_yaml FULL_URI OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key. Example for Serverless Workflow { "id": "myworkflow", "functions": [ { "name": "myfunction", "operation": "https://my.remote.host/apicatalog/apis/123/document" } ] ... } Example configuration: quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/ 1 1 The URI path is https://my.remote.host/apicatalog/apis/123/document . The generated key that configures the URL for the REST client is apicatalog_apis_123_document . FUNCTION_NAME OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key. Example for Serverless Workflow { "id": "myworkflow", "functions": [ { "name": "myfunction", "operation": "https://my.remote.host/apicatalog/apis/123/document" } ] ... } Example configuration: quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/ 1 1 The workflow ID is myworkflow . The function name is myfunction . The generated key that configures the URL for the REST client is myworkflow_myfunction . SPEC_TITLE OpenShift Serverless Logic uses the info.title value from the OpenAPI document to create the configuration key. The title is sanitized to form the key. Example for OpenAPI document openapi: 3.0.3 info: title: stock-service API version: 2.0.0-SNAPSHOT paths: /stock-price/{symbol}: ... Example configuration: quarkus.rest-client.stock-service_API.url=http://localhost:8282/ 1 1 The OpenAPI document title is stock-service API . The generated key that configures the URL for the REST client is stock-service_API . 3.2.1. Using URI alias As an alternative to the kogito.sw.operationIdStrategy property, you can assign an alias to a URI by using the workflow-uri-definitions custom extension. This alias simplifies the configuration process and can be used as a configuration key in REST client settings and function definitions. The workflow-uri-definitions extension allows you to map a URI to an alias, which you can reference throughout the workflow and in your configuration files. This approach provides a centralized way to manage URIs and their configurations. Prerequisites You have access to a OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to the OpenAPI specification files. Procedure Add the workflow-uri-definitions extension to your workflow. Within this extension, create aliases for your URIs. Example workflow { "extensions": [ { "extensionid": "workflow-uri-definitions", 1 "definitions": { "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document" 2 } } ], "functions": [ 3 { "name": "operation1", "operation": "remoteCatalog#operation1" }, { "name": "operation2", "operation": "remoteCatalog#operation2" } ] } 1 Set the extension ID to workflow-uri-definitions . 
2 Set the alias definition by mapping the remoteCatalog alias to a URI, for example, https://my.remote.host/apicatalog/apis/123/document URI. 3 Set the function operations by using the remoteCatalog alias with the operation identifiers, for example, operation1 and operation2 operation identifiers. In the application.properties file, configure the REST client by using the alias defined in the workflow. Example property quarkus.rest-client.remoteCatalog.url=http://localhost:8282/ In the example, the configuration key is set to quarkus.rest-client.remoteCatalog.url , and the URL is set to http://localhost:8282/ , which the REST clients use by referring to the remoteCatalog alias. In your workflow, use the alias when defining functions that operate on the URI. Example Workflow (continued): { "functions": [ { "name": "operation1", "operation": "remoteCatalog#operation1" }, { "name": "operation2", "operation": "remoteCatalog#operation2" } ] } 3.3. Troubleshooting services Efficient troubleshooting of the HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations. To diagnose issues, you can trace HTTP requests and responses. 3.3.1. Tracing HTTP requests and responses OpenShift Serverless Logic uses the Apache HTTP client to the trace HTTP requests and responses. Prerequisites You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to the OpenAPI specification files. You have access to the workflow definition and instance IDs for correlating HTTP requests and responses. You have access to the log configuration of the application where the HTTP service invocations are occurring Procedure To trace HTTP requests and responses, OpenShift Serverless Logic uses the Apache HTTP client by setting the following property: # Turning HTTP tracing on quarkus.log.category."org.apache.http".level=DEBUG Add the following configuration to your application's application.properties file to turn on debugging for the Apache HTTP Client: quarkus.log.category."org.apache.http".level=DEBUG Restart your application to propagate the log configuration changes. After restarting, check the logs for HTTP request traces. Example logs of a traced HTTP request 2023-09-25 19:00:55,242 DEBUG Executing request POST /v2/models/yolo-model/infer HTTP/1.1 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> POST /v2/models/yolo-model/infer HTTP/1.1 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Accept: application/json 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Type: application/json 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocid: inferencepipeline 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocist: Active 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoproctype: SW 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocversion: 1.0 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Length: 23177723 2023-09-25 19:00:55,244 DEBUG http-outgoing-0 >> Host: yolo-model-opendatahub-model.apps.trustyai.dzzt.p1.openshiftapps.com Check the logs for HTTP response traces following the request logs. 
Example logs of a traced HTTP response 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "HTTP/1.1 500 Internal Server Error[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-type: application/json[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "date: Mon, 25 Sep 2023 19:01:00 GMT[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-length: 186[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "set-cookie: 276e4597d7fcb3b2cba7b5f037eeacf5=5427fafade21f8e7a4ee1fa6c221cf40; path=/; HttpOnly; Secure; SameSite=None[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "[\r][\n]" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "{"code":13, "message":"Failed to load Model due to adapter error: Error calling stat on model file: stat /models/yolo-model__isvc-1295fd6ba9/yolov5s-seg.onnx: no such file or directory"}" | [
"{ \"functions\": [ { \"name\": \"myFunction1\", \"operation\": \"classpath:/myopenapi-file.yaml#myFunction1\" } ] }",
"openapi: 3.0.3 info: title: Generated API version: \"1.0\" paths: /: post: operationId: doOperation parameters: - in: header name: notUsed schema: type: string required: false requestBody: content: application/json: schema: USDref: '#/components/schemas/MultiplicationOperation' responses: \"200\": description: OK content: application/json: schema: type: object properties: product: format: float type: number components: schemas: MultiplicationOperation: type: object properties: leftElement: format: float type: number rightElement: format: float type: number",
"{ \"functions\": [ { \"name\": \"multiplication\", \"operation\": \"specs/multiplication.yaml#doOperation\" }, { \"name\": \"subtraction\", \"operation\": \"specs/subtraction.yaml#doOperation\" } ] }",
"{ \"states\": [ { \"name\": \"SetConstants\", \"type\": \"inject\", \"data\": { \"subtractValue\": 32.0, \"multiplyValue\": 0.5556 }, \"transition\": \"Computation\" }, { \"name\": \"Computation\", \"actionMode\": \"sequential\", \"type\": \"operation\", \"actions\": [ { \"name\": \"subtract\", \"functionRef\": { \"refName\": \"subtraction\", \"arguments\": { \"leftElement\": \".fahrenheit\", \"rightElement\": \".subtractValue\" } } }, { \"name\": \"multiply\", \"functionRef\": { \"refName\": \"multiplication\", \"arguments\": { \"leftElement\": \".difference\", \"rightElement\": \".multiplyValue\" } } } ], \"end\": { \"terminate\": true } } ] }",
"{ \"/pet/{petId}\": { \"get\": { \"tags\": [\"pet\"], \"summary\": \"Find pet by ID\", \"description\": \"Returns a single pet\", \"operationId\": \"getPetById\", \"parameters\": [ { \"name\": \"petId\", \"in\": \"path\", \"description\": \"ID of pet to return\", \"required\": true, \"schema\": { \"type\": \"integer\", \"format\": \"int64\" } } ] } } }",
"{ \"name\": \"CallPetStore\", 1 \"actionMode\": \"sequential\", \"type\": \"operation\", \"actions\": [ { \"name\": \"getPet\", \"functionRef\": { \"refName\": \"getPetById\", 2 \"arguments\": { 3 \"petId\": \".petId\" } } } ] }",
"quarkus.rest-client.subtraction_yaml.url=http://myserver.com",
"quarkus.rest-client.subtraction_yaml.url=USD{SUBTRACTION_URL:http://myserver.com}",
"quarkus.rest-client.subtraction_yaml.url=USD{SUBTRACTION_URL:http://myserver.com}",
"quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/ 1",
"{ \"id\": \"myworkflow\", \"functions\": [ { \"name\": \"myfunction\", \"operation\": \"https://my.remote.host/apicatalog/apis/123/document\" } ] }",
"quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/ 1",
"{ \"id\": \"myworkflow\", \"functions\": [ { \"name\": \"myfunction\", \"operation\": \"https://my.remote.host/apicatalog/apis/123/document\" } ] }",
"quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/ 1",
"openapi: 3.0.3 info: title: stock-service API version: 2.0.0-SNAPSHOT paths: /stock-price/{symbol}:",
"quarkus.rest-client.stock-service_API.url=http://localhost:8282/ 1",
"{ \"extensions\": [ { \"extensionid\": \"workflow-uri-definitions\", 1 \"definitions\": { \"remoteCatalog\": \"https://my.remote.host/apicatalog/apis/123/document\" 2 } } ], \"functions\": [ 3 { \"name\": \"operation1\", \"operation\": \"remoteCatalog#operation1\" }, { \"name\": \"operation2\", \"operation\": \"remoteCatalog#operation2\" } ] }",
"quarkus.rest-client.remoteCatalog.url=http://localhost:8282/",
"{ \"functions\": [ { \"name\": \"operation1\", \"operation\": \"remoteCatalog#operation1\" }, { \"name\": \"operation2\", \"operation\": \"remoteCatalog#operation2\" } ] }",
"Turning HTTP tracing on quarkus.log.category.\"org.apache.http\".level=DEBUG",
"quarkus.log.category.\"org.apache.http\".level=DEBUG",
"2023-09-25 19:00:55,242 DEBUG Executing request POST /v2/models/yolo-model/infer HTTP/1.1 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> POST /v2/models/yolo-model/infer HTTP/1.1 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Accept: application/json 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Type: application/json 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocid: inferencepipeline 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocist: Active 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoproctype: SW 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocversion: 1.0 2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Length: 23177723 2023-09-25 19:00:55,244 DEBUG http-outgoing-0 >> Host: yolo-model-opendatahub-model.apps.trustyai.dzzt.p1.openshiftapps.com",
"2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"HTTP/1.1 500 Internal Server Error[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"content-type: application/json[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"date: Mon, 25 Sep 2023 19:01:00 GMT[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"content-length: 186[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"set-cookie: 276e4597d7fcb3b2cba7b5f037eeacf5=5427fafade21f8e7a4ee1fa6c221cf40; path=/; HttpOnly; Secure; SameSite=None[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"[\\r][\\n]\" 2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << \"{\"code\":13, \"message\":\"Failed to load Model due to adapter error: Error calling stat on model file: stat /models/yolo-model__isvc-1295fd6ba9/yolov5s-seg.onnx: no such file or directory\"}\""
]
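As a variation that combines the URI alias from section 3.2.1 with the environment-variable substitution pattern shown in section 3.1.3, you can externalize the alias URL as well; the REMOTE_CATALOG_URL variable name below is purely illustrative: quarkus.rest-client.remoteCatalog.url=${REMOTE_CATALOG_URL:http://localhost:8282/}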
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serverless_logic/managing-services |
Chapter 13. Scheduling Messages | Chapter 13. Scheduling Messages You can specify a time in the future, at the earliest, for a message to be delivered. This can be done by setting the _AMQ_SCHED_DELIVERY scheduled delivery property before the message is sent. The specified value must be a positive long that corresponds to the time in milliseconds for the message to be delivered. Below is an example of sending a scheduled message using the Jakarta Messaging API. // Create a message to be delivered in 5 seconds TextMessage message = session.createTextMessage("This is a scheduled message that will be delivered in 5 sec."); message.setLongProperty("_AMQ_SCHED_DELIVERY", System.currentTimeMillis() + 5000); producer.send(message); ... // The message will not be received immediately, but 5 seconds later TextMessage messageReceived = (TextMessage) consumer.receive(); Scheduled messages can also be sent using the core API by setting the _AMQ_SCHED_DELIVERY property before sending the message; a minimal sketch of that core API approach appears after the code listing below. | [
"// Create a message to be delivered in 5 seconds TextMessage message = session.createTextMessage(\"This is a scheduled message message that will be delivered in 5 sec.\"); message.setLongProperty(\"_AMQ_SCHED_DELIVERY\", System.currentTimeMillis() + 5000); producer.send(message); // The message will not be received immediately, but 5 seconds later TextMessage messageReceived = (TextMessage) consumer.receive();"
]
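The core API approach referenced above is not shown in this chapter. A minimal sketch, assuming an existing ClientSession and ClientProducer from the ActiveMQ Artemis core client API (org.apache.activemq.artemis.api.core.client), might look like the following; the message body and the 5-second delay are illustrative:
// Create a durable core message and schedule it for delivery in 5 seconds
ClientMessage message = session.createMessage(true);
message.getBodyBuffer().writeString("This is a scheduled core message.");
message.putLongProperty("_AMQ_SCHED_DELIVERY", System.currentTimeMillis() + 5000);
producer.send(message);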
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/scheduling_messages |
1.6. Encryption | 1.6. Encryption Teiid Transports Teiid provides built-in support for JDBC/ODBC over SSL. JDBC defaults to just sensitive message encryption (login mode), while ODBC (the pg transport) defaults to just clear text passwords if using simple username/password authentication. The Red Hat JBoss EAP instance must be configured for SSL as well so that any web services consuming Teiid may use SSL. Configuration Passwords in configuration files are stored as a hash. Source Access Encrypting remote source access is the responsibility of the resource adapter and library/driver used to access the source system. Temporary Data Teiid temporary data, which can be stored on the file system as configured by the BufferManager, may optionally be encrypted. Set the buffer-service-encrypt-files property to true on the Teiid subsystem to use 128-bit AES to encrypt any files written by the BufferManager. A new symmetric key will be generated for each start of the Teiid system on each server. A performance hit will be seen for memory-intensive processing in which data typically spills to disk. This setting does not affect how VDBs (either the artifact or an exploded form) or log files are written to disk. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/encryption
Chapter 4. User Management | Chapter 4. User Management This section describes the administration functions for managing users. 4.1. Searching For Users If you need to manage a specific user, click on Users in the left menu bar. Users This menu option brings you to the user list page. In the search box you can type in a full name, last name, or email address you want to search for in the user database. The query will bring up all users that match your criteria. The View all users button will list every user in the system. This will search just the local Red Hat Single Sign-On database and not the federated database (i.e. LDAP) because some backends like LDAP don't have a way to page through users. So if you want the users from the federated backend to be synced into the Red Hat Single Sign-On database, you need to either: Adjust search criteria. That will sync just the backend users matching the criteria into the Red Hat Single Sign-On database. Go to the User Federation tab and click Sync all users or Sync changed users in the page with your federation provider. See User Federation for more details. 4.2. Creating New Users To create a user, click on Users in the left menu bar. Users This menu option brings you to the user list page. On the right side of the empty user list, you should see an Add User button. Click that to start creating your new user. Add User The only required field is Username . Click Save. This will bring you to the management page for your new user. 4.3. Deleting Users To delete a user, click on Users in the left menu bar. Users This menu option brings you to the user list page. Click View all users or search to find the user you intend to delete. View All Users In the list of users, click Delete next to the user you want to remove. You will be asked to confirm that you are sure you want to delete this user. Click Delete in the confirmation box to confirm. 4.4. User Attributes Beyond basic user metadata like name and email, you can store arbitrary user attributes. Choose a user to manage, then click on the Attributes tab. Users Enter in the attribute name and value in the empty fields and click the Add button next to it to add a new field. Note that any edits you make on this page will not be stored until you hit the Save button. 4.5. User Credentials When viewing a user, if you go to the Credentials tab you can manage the user's credentials. Credential Management The credentials are listed in a table, which has the following fields: Position The arrow buttons in this column allow you to shift the priority of the credential for the user, with the topmost credential having the highest priority. This priority determines which credential will be shown first to a user in case of a choice during login. The highest priority of those available to the user will be the one selected. Type This shows the type of the credential, for example password or otp . User Label This is an assignable label to recognise the credential when presented as a selection option during login. It can be set to any value to describe the credential. Data This shows the non-confidential technical information about the credential. It is originally hidden, but you can press Show data... to reveal it for a credential. Actions This column has two buttons. Save records the value of the User Label, while Delete will remove the credential. 4.5.1. Creating a Password for the User If a user doesn't have a password, or if the password has been deleted, the Set Password section will be shown on the page.
Credential Management - Set Password To create a password for a user, type in a new one. Click on the Set Password button after you've typed everything in. If the Temporary switch is on, this new password can only be used once and the user will be asked to change their password after they have logged in. If a user already has a password, it can be reset in the Reset Password section. Alternatively, if you have email set up, you can send an email to the user that asks them to reset their password. Choose Update Password from the Reset Actions list box and click Send Email . You can optionally set the validity of the e-mail link, which defaults to the one preset in the Tokens tab in the realm settings. The sent email contains a link that will bring the user to the update password screen. Note that a user can only have a single credential of type password. 4.5.2. Creating other credentials You cannot configure other types of credentials for a specific user within the Admin Console. This is the responsibility of the user. You can only delete credentials for a user on the Credentials tab, for example if the user has lost an OTP device, or if a credential has been compromised. 4.5.2.1. Creating an OTP If OTP is conditional in your realm, the user will have to go to the User Account Management service to re-configure a new OTP generator. If OTP is required, then the user will be asked to re-configure a new OTP generator when they log in. Like passwords, you can alternatively send an email to the user that will ask them to reset their OTP generator. Choose Configure OTP in the Reset Actions list box and click the Send Email button. The sent email contains a link that will bring the user to the OTP setup screen. You can use this method even if the user already has an OTP credential, and would like to set up some more. 4.6. Required Actions Required Actions are tasks that a user must finish before they are allowed to log in. A user must provide their credentials before required actions are executed. Once a required action is completed, the user will not have to perform the action again. Here are explanations of some of the built-in required action types: Update Password When set, a user must change their password. Configure OTP When set, a user must configure a one-time password generator on their mobile device using either the Free OTP or Google Authenticator application. Verify Email When set, a user must verify that they have a valid email account. An email will be sent to the user with a link they have to click. Once this workflow is successfully completed, they will be allowed to log in. Update Profile This required action asks the user to update their profile information, i.e. their name, address, email, and/or phone number. Admins can add required actions for each individual user within the user's Details tab in the Admin Console. Setting Required Action In the Required User Actions list box, select all the actions you want to add to the account. If you want to remove one, click the X next to the action name. Also remember to click the Save button after you've decided what actions to add. 4.6.1. Default Required Actions You can also specify required actions that will be added to an account whenever a new user is created, i.e. through the Add User button on the user list screen, or via the user registration link on the login page. To specify the default required actions, go to the Authentication left menu item and click on the Required Actions tab.
Default Required Actions Simply click the checkbox in the Default Action column of the required actions that you want to be executed when a brand new user logs in. 4.6.2. Terms and Conditions Many organizations have a requirement that when a new user logs in for the first time, they need to agree to the terms and conditions of the website. Red Hat Single Sign-On has this functionality implemented as a required action, but it requires some configuration. For one, you have to go to the Required Actions tab described earlier and enable the Terms and Conditions action. You must also edit the terms.ftl file in the base login theme. See the Server Developer Guide for more information on extending and creating themes. 4.7. Impersonation It is often useful for an admin to impersonate a user. For example, a user may be experiencing a bug in one of your applications and an admin may want to impersonate the user to see if they can duplicate the problem. Admins with the appropriate permission can impersonate a user. There are two locations where an admin can initiate impersonation. The first is on the Users list tab. Users You can see here that the admin has searched for john . Next to John's account you can see an Impersonate button. Click that to impersonate the user. Also, you can impersonate the user from the user Details tab. User Details Near the bottom of the page you can see the Impersonate button. Click that to impersonate the user. When impersonating, if the admin and the user are in the same realm, then the admin will be logged out and automatically logged in as the user being impersonated. If the admin and user are not in the same realm, the admin will remain logged in, but additionally be logged in as the user in that user's realm. In both cases, the browser will be redirected to the impersonated user's User Account Management page. Any user with the realm's impersonation role can impersonate a user. Please see the Admin Console Access Control chapter for more details on assigning administration permissions. 4.8. User Registration You can enable Red Hat Single Sign-On to allow user self registration. When enabled, the login page has a registration link the user can click on to create their new account. When user self registration is enabled, it is possible to use the registration form to detect valid usernames and emails. It is also possible to enable reCAPTCHA Support . Enabling registration is pretty simple. Go to the Realm Settings left menu and click it. Then go to the Login tab. There is a User Registration switch on this tab. Turn it on, then click the Save button. Login Tab After you enable this setting, a Register link should show up on the login page. Registration Link Clicking on this link will bring the user to the registration page where they have to enter in some user profile information and a new password. Registration Form You can change the look and feel of the registration form as well as remove or add additional fields that must be entered. See the Server Developer Guide for more information. 4.8.1. reCAPTCHA Support To safeguard registration against bots, Red Hat Single Sign-On has integration with Google reCAPTCHA. To enable this, you need to first go to the Google reCAPTCHA Website and create an API key so that you can get your reCAPTCHA site key and secret. (FYI, localhost works by default so you don't have to specify a domain). Next, there are a few steps you need to perform in the Red Hat Single Sign-On Admin Console.
Click the Authentication left menu item and go to the Flows tab. Select the Registration flow from the drop down list on this page. Registration Flow Set the 'reCAPTCHA' requirement to Required by clicking the appropriate radio button. This will enable reCAPTCHA on the screen. Next, you have to enter in the reCAPTCHA site key and secret that you generated at the Google reCAPTCHA Website. Click on the 'Actions' button that is to the right of the reCAPTCHA flow entry, then the "Config" link, and enter in the reCAPTCHA site key and secret on this config page. Recaptcha Config Page The final step you have to do is to change some default HTTP response headers that Red Hat Single Sign-On sets. Red Hat Single Sign-On will prevent a website from including any login page within an iframe. This is to prevent clickjacking attacks. You need to authorize Google to use the registration page within an iframe. Go to the Realm Settings left menu item and then go to the Security Defenses tab. You will need to add https://www.google.com to the values of both the X-Frame-Options and Content-Security-Policy headers. Authorizing Iframes Once you do this, reCAPTCHA should show up on your registration page. You may want to edit register.ftl in your login theme to muck around with the placement and styling of the reCAPTCHA button. See the Server Developer Guide for more information on extending and creating themes. 4.9. Personal data collected by Red Hat Single Sign-On By default, Red Hat Single Sign-On collects the following: Basic user profile, such as email, firstname, and lastname Basic user profile used for social accounts and references to the social account when using a social login Device information collected for audit and security purposes, such as the IP address, operating system name, and browser name The information collected in Red Hat Single Sign-On is highly customizable. Be aware of the following guidelines when making customizations: Registration and account forms could contain custom fields, such as birthday, gender, and nationality. An administrator could configure Red Hat Single Sign-On to retrieve that data from a social provider or a user storage provider such as LDAP. Red Hat Single Sign-On collects user credentials, such as password, OTP codes, and WebAuthn public keys. This information is encrypted and saved in a database, so it is not visible to Red Hat Single Sign-On administrators. However, each type of credential can include non-confidential metadata that is visible to administrators such as the algorithm that is used to hash the password and the number of hash iterations used to hash the password. With authorization services and UMA support enabled, Red Hat Single Sign-On can hold information about some objects for which a particular user is the owner. For example, Red Hat Single Sign-On can track that the user john is the owner of a photoalbum album with animals and a few photos called lion picture and cow picture in this album.
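The console procedures above can also be scripted. The following is a minimal sketch using the Admin CLI (kcadm.sh) that ships with Red Hat Single Sign-On; it is not part of the procedure above, and the server URL, the realm name myrealm, the username jdoe, and the password value are placeholder assumptions you would replace with your own. Verify the option names against the Admin CLI help for your installed version.

# Log the CLI in against the master realm first (server URL is an example; you are prompted for the admin password).
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin

# Set a temporary password: the user is forced to choose a new one at next login,
# which mirrors the Temporary switch described above.
./kcadm.sh set-password -r myrealm --username jdoe --new-password 'S3cretPass!' --temporary

# Look up the user's internal id, then attach required actions
# (Update Password and Configure OTP) to that account.
./kcadm.sh get users -r myrealm -q username=jdoe --fields id
./kcadm.sh update users/<user_id> -r myrealm -s 'requiredActions=["UPDATE_PASSWORD","CONFIGURE_TOTP"]'

As with the console, the required actions run the next time jdoe authenticates; the UPDATE_PASSWORD and CONFIGURE_TOTP identifiers correspond to the Update Password and Configure OTP actions listed earlier.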
Installing on vSphere | Installing on vSphere OpenShift Container Platform 4.12 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs βββ lin β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ mac β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ win βββ 108f4d17.0.crt βββ 108f4d17.r1.crl βββ 7e757f6a.0.crt βββ 8e4f8471.0.crt βββ 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs βββ lin β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ mac β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ win βββ 108f4d17.0.crt βββ 108f4d17.r1.crl βββ 7e757f6a.0.crt βββ 8e4f8471.0.crt βββ 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs βββ lin β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ mac β βββ 108f4d17.0 β βββ 108f4d17.r1 β βββ 7e757f6a.0 β βββ 8e4f8471.0 β βββ 8e4f8471.r0 βββ win βββ 108f4d17.0.crt βββ 108f4d17.r1.crl βββ 7e757f6a.0.crt βββ 8e4f8471.0.crt βββ 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 7 serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 8 diskType: thin 9 network: VM_Network cluster: vsphere_cluster_name 10 apiVIPs: - api_vip ingressVIPs: - ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo $PATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo $PATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"$TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"$TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=${IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir $HOME/clusterconfig",
"openshift-install create manifests --dir $HOME/clusterconfig ? SSH Public Key ls $HOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir $HOME/clusterconfig ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator",
"oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w",
"NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s",
"oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}",
"16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader",
"oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator",
"I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed",
"oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID",
"/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_vsphere/index |